'No converter registered for type Deedle.Frame' using F# R type provider and ggplot2 - r

I'm working on my first attempt to use R's ggplot2 via the F# R type provider.
Here's my code:
let (++) (plot1: RDotNet.SymbolicExpression) (plot2: RDotNet.SymbolicExpression) =
    R.``+``(plot1, plot2)

let ChartGgPlot2 (prices : Prices) =
    try
        let fileName = makeFile ".png"
        let priceSeries = prices.Prices |> Seq.map (fun p -> p.Date, p.Close) |> series
        let dataFrame = Deedle.Frame.ofRecords priceSeries
        R.png(filename=fileName, height=200, width=300, bg="white") |> ignore
        R.ggplot(
            namedParams [
                "data", box dataFrame;
                "mapping", box (
                    R.aes__string(x="Date", y="Close"))])
        ++ R.geom__point() |> ignore
        R.dev_off() |> ignore
        fileName |> Choice.succeed
    with
    | e -> Choice.fail e.Message
p.Date is a System.DateTime and p.Close is a double.
At runtime I get this exception at the point of calling R.ggplot:
No converter registered for type Deedle.Frame`2[[System.DateTime,
mscorlib, Version=4.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089],[System.String, mscorlib,
Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] or
any of its base types
I've tried the solution suggested here (copying two DLLs): Deedle frame to R, but that didn't make a difference.
I should also say that my usage of series and Frame.ofRecords is pretty much guesswork at this point.
Many thanks.
Edit:
It's a compiled .NET 4.6 project with RProvider (1.1.20) and Deedle.RPlugin (1.2.5) added via Nuget.
ggplot2 works correctly from RGui.

Tomas's comment about config files and probing locations wasn't the answer - but it clued me in to what actually was the answer.
I needed to use NuGet to add references to Deedle.RPlugin, not only to the assembly that was doing the R calls to render a chart, but also to my 'main' assembly that references the charting assembly.
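For example, the corresponding Package Manager console commands would look roughly like this (the project names are hypothetical placeholders for the charting assembly and the 'main' assembly):
Install-Package Deedle.RPlugin -ProjectName MyApp.Charting
Install-Package Deedle.RPlugin -ProjectName MyApp.Main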
I don't know if this is an inherent limitation in the way the build system interacts with the type provider. But for now I'm very happy to have a workaround.
(For teaching purposes it would be great to know if there could be a long term fix.)
Huge thanks to those that replied.

It's quite possible something is off with your FSLab and/or RProvider install. This, for example, should work:
#load @"..\..\FSLAB\packages\FsLab\FsLab.fsx"

open System
open Deedle
open FSharp.Charting
open RDotNet
open RProvider
open RProvider.graphics
open RProvider.stats
open RProvider.datasets
open RProvider.ggplot2

type DtPx = {
    Dt: System.DateTime
    Px: float
}

let rnd = System.Random()
let nextPx() = rnd.NextDouble() - 0.5
let nextDay i = DateTime.Today.AddDays(float i)

let data = List.init 100 (fun i -> { Dt = nextDay i; Px = nextPx() })
let df = Frame.ofRecords data
let mtc = R.mtcars.GetValue<Frame<string, string>>()

let (++) (plot1: RDotNet.SymbolicExpression) (plot2: RDotNet.SymbolicExpression) =
    R.``+``(plot1, plot2)

R.ggplot(
    namedParams [
        "data", box mtc;
        "mapping", box (
            R.aes__string(x="disp", y="drat"))])
++ R.geom__point()

R.ggplot(
    namedParams [
        "data", box df;
        "mapping", box (
            R.aes__string(x="Dt", y="Px"))])
++ R.geom__point()
You can use this in an fsx file, but also in a compiled .exe to save the output to png. In that case you will have to reference DynamicInterop, RDotNet, RProvider, etc. as well.
References: this is for an empty project with the Deedle- and RProvider-related packages added via NuGet.
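As a rough sketch of the compiled-to-png scenario mentioned above (this is not from the original answer; it simply reuses the R.png / R.dev_off calls from the question together with the mtc frame and the ++ operator defined in the script):

let savePlot (fileName: string) =
    // open a png graphics device, build the plot, then close the device
    R.png(filename=fileName, height=200, width=300, bg="white") |> ignore
    R.ggplot(
        namedParams [
            "data", box mtc;
            "mapping", box (R.aes__string(x="disp", y="drat"))])
    ++ R.geom__point()
    |> ignore // some setups may need an explicit R print of the ggplot object for it to render to the device
    R.dev_off() |> ignore

savePlot "mtcars.png"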

Related

How to open an SQLite database readonly in Julia?

I'd like to read my Safari history database from a Julia script (Mac OS X).
I have a command line script that works:
sqlite3 -readonly ~/Library/Safari/History.db 'SELECT v.title, i.url FROM history_items i, history_visits v WHERE i.url LIKE "%en.wikipedia.org%" AND i.id=v.history_item AND v.title LIKE "%- Wikipedia%" GROUP BY v.title ORDER BY v.visit_time'
... but trying it in Julia (in Juno / Atom) gives me a permission error
using SQLite, DBInterface, DataFrames

db = SQLite.DB("/Users/grimxn/Library/Safari/History.db")
sql = """
SELECT v.title, i.url, v.visit_time
FROM history_items i, history_visits v
WHERE i.url LIKE "%en.wikipedia.org%"
AND i.id=v.history_item
AND v.title LIKE "%- Wikipedia%"
GROUP BY v.title
ORDER BY v.visit_time
"""
result = DBInterface.execute(db, sql) |> DataFrame
(rows, cols) = size(result)
println("Result has $(rows) rows")
println("Earliest: $(result[1,1])")
println("Latest: $(result[rows,1])")
ERROR: LoadError: SQLite.SQLiteException("unable to open database file")
Now, when I copy the database to my home directory, and swap
db = SQLite.DB("/Users/grimxn/Library/Safari/History.db")
to
db = SQLite.DB("/Users/grimxn/History.db")
everything works, so I guess the Julia / Juno process only has read permission on the original file, but SQLite is opening the database read/write.
How do I attach to the database as readonly in Julia?
Theoretically, use a URI connection string: file:foo.db?mode=ro.
This is documented in the SQLite manual.
Practically, it appears the current version of the SQLite.jl package does not support URIs, and neither does it support flags that could be passed along to sqlite3_open_v2().
Leaving this answer for reference just in case the Julia package fixes this some day.
Jaen's answer was correct, and also correctly predicted that the mode=ro flag would be supported. It is now supported, and so the following will work (and does as of today):
julia> using SQLite
julia> db = SQLite.DB("file:/path/to/db.sqlite?mode=ro")
SQLite.DB("file:/path/to/db.sqlite?mode=ro")

Clean3.0 get directory contents

I am using the CleanIDE for the Clean 3.0 programming language.
What I am trying to do is implement a function that receives the name of a directory on my system and returns a list of all the files in that directory.
I don't know whether the type of such a function should be something like File -> [String] or something else; even though a directory is a file, maybe that is not what the developers of Clean intended...
Thanks a lot!
This functionality is not available in the StdEnv environment, but there are two libraries that can help with this:
The Directory library contains a module Directory which has a function getDirectoryContents :: !Path !*env -> (!(!DirError, [DirEntry]), !*env) | FileSystem env.
The Platform library contains a module System.Directory which has a function readDirectory :: !FilePath !*w -> (!MaybeOSError [FilePath], !*w).
In both cases the first argument is a path to the directory and the second argument is the *World, which is Clean's typical way of performing impure operations (see chapter 9 of the language report).
Code examples
With Directory:
import Directory
Start w
    # (dir,w) = getDirectoryContents (RelativePath []) w
    = dir
With Platform:
import System.Directory
Start w
    # (dir,w) = readDirectory "." w
    = dir

CA-SCM/Harvest: get package ids during the promote

I want to retrieve the list of packages during the promote, e.g. when promoting from DEV to QA, and get the list of files inside each package. What are the two commands for this?
Are you using the promote process from Workbench?
Consider using system variables in a post-linked process on the promote process:
[package] [version]
I am assuming you are executing the promote process on a package group.
This will provide the list of packages and the versions in those packages.
If you need more details, please reach out to us in the CA Harvest communities, where visibility is high:
https://communities.ca.com/community/ca-harvest
Regards,
Balakrishna
When promoting from DEV to QA, use the post-link process, for example:
scriptName "[project]" "[state]"
On the server, put the script (including a SELECT like the following):
SELECT DISTINCT c.PACKAGENAME, e.ITEMNAME, g.USERNAME, d.MAPPEDVERSION VERSION, f.PATHFULLNAME
FROM HARSTATE a, HARENVIRONMENT b, HARPACKAGE c, HARVERSIONS d, HARITEMS e, HARPATHFULLNAME f, HARUSER g
WHERE b.ENVOBJID = a.ENVOBJID
  AND a.STATEOBJID = c.STATEOBJID
  AND b.ENVIRONMENTNAME = '${Project}'
  AND a.STATENAME = '${state}'
  AND c.PACKAGEOBJID = d.PACKAGEOBJID
  AND d.ITEMOBJID = e.ITEMOBJID
  AND e.PARENTOBJID = f.ITEMOBJID
  AND e.ITEMTYPE <> 0
  AND g.USROBJID = c.CREATORID
  AND c.PACKAGENAME != 'BASE'
ORDER BY c.PACKAGENAME, f.PATHFULLNAME

loading RProvider in F#

I'm still a noob with F#, and I don't understand all the syntax and logic for loading and using packages.
For example, I would like to use (Blue Mountain's) RProvider.
http://bluemountaincapital.github.io/FSharpRProvider/index.html
Using VS2015, in my current solution, I've installed the package with the PM console and Install-Package RProvider.
I modified the RProvider.fsx a bit because I've got newer versions of R.NET Community:
#nowarn "211"
// Standard NuGet or Paket location
#I "."
#I "lib/net40"
// Standard NuGet locations for R.NET
#I "../R.NET.Community.1.6.4/lib/net40"
#I "../R.NET.Community.FSharp.0.1.9/lib/net40"
// Standard Paket locations for R.NET
#I "../R.NET.Community/lib/net40"
#I "../R.NET.Community.FSharp.1.6.4/lib/net40"
// Try various folders that people might like
#I "bin"
#I "../bin"
#I "../../bin"
#I "lib"
#I "../packages"
// Reference RProvider and RDotNet
#r "RDotNet.dll"
#r "RDotNet.FSharp.dll"
#r "RProvider.dll"
#r "RProvider.Runtime.dll"
open RProvider
do fsi.AddPrinter(fun (synexpr:RDotNet.SymbolicExpression) -> synexpr.Print())
Now my questions are:
1) How do I load a package (RProvider) from F# Interactive?
Well, actually I managed to do it this way. For example, the RProvider.fsx file is in the path
C:\Users\Fagui\Documents\GitHub\Learning Fsharp\Algo Stanford\packages\RProvider.1.1.15\RProvider.fsx
What I did is:
#I @"C:\Users\Fagui\Documents\GitHub\Learning Fsharp\Algo Stanford";;
#load "packages\RProvider.1.1.15\RProvider.fsx";;
and it works :-)
But can I avoid writing the whole path?
2) In VS2015, if I want to include it in a solution...
In the solution explorer I have included the RProvider.fsx file (below AssemblyInfo.fs; App.config and packages.config come after: is this right?), and last the program itself, Rtype.fs.
I'm trying to reproduce the example from
http://bluemountaincapital.github.io/FSharpRProvider/Statistics-QuickStart.html
open System
open RDotNet // the namespace or module 'RDotNet' is not defined
open RProvider
open RProvider.graphics
open RProvider.stats
// let x = System.Environment.CurrentDirectory
// val x : string
printfn "hello world"
Console.ReadKey() |> ignore
// Random number generator
let rng = Random()
let rand () = rng.NextDouble()
// Generate fake X1 and X2
let X1s = [ for i in 0 .. 9 -> 10. * rand () ]
let X2s = [ for i in 0 .. 9 -> 5. * rand () ]
// Build Ys, following the "true" model
let Ys = [ for i in 0 .. 9 -> 5. + 3. * X1s.[i] - 2. * X2s.[i] + rand () ]
let dataset =
    namedParams [
        "Y", box Ys;
        "X1", box X1s;
        "X2", box X2s; ]
    |> R.data_frame
let result = R.lm(formula = "Y~X1+X2", data = dataset)
let coefficients = result.AsList().["coefficients"].AsNumeric()
let residuals = result.AsList().["residuals"].AsNumeric()
let summary = R.summary(result)
summary.AsList().["r.squared"].AsNumeric()
// this expression should have type 'unit' but has type 'NumericVector'...
R.plot result
I'm getting some warnings/errors from IntelliSense, although the compiler managed a build.
When executing the exe, it looks like the Windows screen is busy; I manage to see some graphs, but they look like they have nothing to do with what Rtype.fs is saying...
Thanks for helping!
EDIT
First of all, I would not recommend using a different version of R.NET than the one that RProvider installs automatically as a dependency. The loading is a bit fragile and it might break things.
1) Regarding the path, you should be able to pass a relative path to #load, so just dropping the #I from your script should do the trick.
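For example, assuming the script sits in the same folder that contains the packages directory (as in the layout above), this alone should work:
#load @"packages\RProvider.1.1.15\RProvider.fsx";;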
2) When referencing a dependency from a project (rather than from a script file), you need to add the dependency to the project references. In Visual Studio, this is done by right-clicking "References" in your project and using "Add Reference". For type providers, you also need to click "Enable" when the reference is loaded.

Can I use sbt's `apiMappings` setting for managed dependencies?

I'd like the ScalaDoc I generate with sbt to link to external libraries, and in sbt 0.13 we have autoAPIMappings which is supposed to add these links for libraries that declare their apiURL. In practice though, none of the libraries I use provide this in their pom/ivy metadata, and I suspect some of these libraries will never do so.
The apiMappings setting is supposed to help with just that, but it is typed as Map[File, URL] and hence geared towards setting doc urls for unmanaged dependencies. Managed dependencies are declared as instances of sbt.ModuleID and cannot be inserted directly in that map.
Can I somehow populate the apiMappings setting with something that will associate a URL with a managed dependency?
A related question is: does sbt provide an idiomatic way of getting a File from a ModuleID? I guess I could try to evaluate some classpaths and get back Files to try and map them to ModuleIDs but I hope there is something simpler.
Note: this is related to https://stackoverflow.com/questions/18747265/sbt-scaladoc-configuration-for-the-standard-library/18747266, but that question differs by linking to the scaladoc for the standard library, for which there is a well known File scalaInstance.value.libraryJar, which is not the case in this instance.
I managed to get this working for referencing scalaz and play by doing the following:
apiMappings ++= {
  val cp: Seq[Attributed[File]] = (fullClasspath in Compile).value
  def findManagedDependency(organization: String, name: String): File = {
    (for {
      entry <- cp
      module <- entry.get(moduleID.key)
      if module.organization == organization
      if module.name.startsWith(name)
      jarFile = entry.data
    } yield jarFile).head
  }
  Map(
    findManagedDependency("org.scalaz", "scalaz-core") -> url("https://scalazproject.ci.cloudbees.com/job/nightly_2.10/ws/target/scala-2.10/unidoc/"),
    findManagedDependency("com.typesafe.play", "play-json") -> url("http://www.playframework.com/documentation/2.2.1/api/scala/")
  )
}
YMMV of course.
The accepted answer is good, but it'll fail when assumptions about exact project dependencies don't hold. Here's a variation that might prove useful:
apiMappings ++= {
  def mappingsFor(organization: String, names: List[String], location: String, revision: (String) => String = identity): Seq[(File, URL)] =
    for {
      entry: Attributed[File] <- (fullClasspath in Compile).value
      module: ModuleID <- entry.get(moduleID.key)
      if module.organization == organization
      if names.exists(module.name.startsWith)
    } yield entry.data -> url(location.format(revision(module.revision)))

  val mappings: Seq[(File, URL)] =
    mappingsFor("org.scala-lang", List("scala-library"), "http://scala-lang.org/api/%s/") ++
      mappingsFor("com.typesafe.akka", List("akka-actor"), "http://doc.akka.io/api/akka/%s/") ++
      mappingsFor("com.typesafe.play", List("play-iteratees", "play-json"), "http://playframework.com/documentation/%s/api/scala/index.html", _.replaceAll("[\\d]$", "x"))

  mappings.toMap
}
(Including scala-library here is redundant, but useful for illustration purposes.)
If you perform mappings foreach println, you'll get output like (note that I don't have Akka in my dependencies):
(/Users/michaelahlers/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.7.jar,http://scala-lang.org/api/2.11.7/)
(/Users/michaelahlers/.ivy2/cache/com.typesafe.play/play-iteratees_2.11/jars/play-iteratees_2.11-2.4.6.jar,http://playframework.com/documentation/2.4.x/api/scala/)
(/Users/michaelahlers/.ivy2/cache/com.typesafe.play/play-json_2.11/jars/play-json_2.11-2.4.6.jar,http://playframework.com/documentation/2.4.x/api/scala/)
This approach:
- Allows for none or many matches to the module identifier.
- Concisely supports multiple modules linking to the same documentation (or, with Nil provided to names, all modules for an organization).
- Defers to the module as the version authority, but lets you map over versions as needed (as in the case of Play's libraries, where x is used for the patch number).
Those improvements allow you to create a separate SBT file (call it scaladocMappings.sbt) that can be maintained in one place and easily copied and pasted into any project.
Alternatively to my last suggestion, the sbt-api-mappings plugin by ThoughtWorks shows a lot of promise. Long term, that's a far more sustainable route than each project maintaining its own set of mappings.
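For completeness, a minimal sketch of wiring that plugin up in project/plugins.sbt; the exact coordinates and version below are an assumption, so check the plugin's README before using it:
// project/plugins.sbt (coordinates/version are an assumption; see the plugin's documentation)
addSbtPlugin("com.thoughtworks.sbt-api-mappings" % "sbt-api-mappings" % "latest.release")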
