Load (import) an existing non-sample CVAT project into FiftyOne as a Dataset

I am trying:
import fiftyone as fo
dataset = fo.load_dataset("import_test")
The project exists under that name; I created it in CVAT beforehand.
This call was used in https://githubhelp.com/voxel51/fiftyone/issues/1611,
but I am getting:
ValueError: Dataset 'import_test' not found
In the documentation:
https://voxel51.com/docs/fiftyone/integrations/cvat.html
and in these examples and other questions:
https://colab.research.google.com/github/voxel51/fiftyone-examples/blob/master/examples/pytorch_detection_training.ipynb#scrollTo=5crNDNsRWdPT
Uploading large dataset from FiftyOne to CVAT
https://medium.com/voxel51/loading-open-images-v6-and-custom-datasets-with-fiftyone-18b5334851c3
there is only usage of the sample FiftyOne datasets:
dataset = foz.load_zoo_dataset("...")
which works correctly for me as a CVAT server connection check, but doesn't suit my work needs.
In
https://towardsdatascience.com/tools-to-annotate-and-improve-computer-vision-datasets-f9b99cdb0e04
a dataset stored on the local machine is used:
dataset = fo.Dataset.from_dir("...")
Can I load a project that was originally created in CVAT directly from the server?
What argument is supposed to be used other than the project's name in CVAT?
Is that possible, or does it have to start out as a FiftyOne dataset?

Yes, you can use the fiftyone.utils.cvat.import_annotations() method to import labels that are already in a CVAT project or task into a FiftyOne Dataset.
Note that in order to use fo.load_dataset(), the dataset needs to already exist in FiftyOne. You can initialize an empty dataset, as shown in the import_annotations() example:
dataset = fo.Dataset("my-dataset-name")
Then, you can call import_annotations(), providing a project name and optionally a data_path and download_media=True to download all of the media from your project to a local directory, along with all of the labels in your project, and import them into the dataset you just created.
import fiftyone as fo
import fiftyone.utils.cvat as fouc

dataset = fo.Dataset("my-dataset-name")

fouc.import_annotations(
    dataset,
    project_name=project_name,  # the name of your existing CVAT project
    data_path="/tmp/cvat_import",
    download_media=True,
)
If your media already exists on disk, then see the linked example for how to provide a data_map that maps each CVAT filename to the filepath of the media on local disk.

Related

How to transfer my files to R Projects, and then to GitHub?

I have 3 R scripts:
data1.r
data2.r
graph1.r
The two data scripts run some math and generate 2 separate data files, which I save in my working directory. I then load these two files in graph1.r and use them to plot the data.
How can I organise and create an R project which has:
the two data scripts - data1.r and data2.r
another script which loads these files (graph1.r)
the output of graph1.r
I would then like to share all of this on GitHub (I know how to do this part).
Edit -
Here is the data1 script
df1 <- data.frame(x = seq(1,100,1), y=rnorm(100))
save(df1, file = "data1.Rda")
Here is the data2 script
df2 <- data.frame(x = seq(1,100,1), y=rnorm(100))
save(df2, file = "data2.Rda")
Here is the graph1 script
load(file = "data1.Rda")
load(file = "data2.Rda")
library(ggplot2)
ggplot()+geom_point(data= df1, aes(x=x,y=y))+geom_point(data= df2, aes(x=x,y=y))
Question worded differently -
How would the above need to be executed inside a project?
I have looked at the following tutorials -
https://r4ds.had.co.nz/workflow-projects.html
https://martinctc.github.io/blog/rstudio-projects-and-working-directories-a-beginner's-guide/
https://swcarpentry.github.io/r-novice-gapminder/02-project-intro/
https://www.tidyverse.org/blog/2017/12/workflow-vs-script/
https://chrisvoncsefalvay.com/2018/08/09/structuring-r-projects/
https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects
I have broken my answer into three parts:
The question in your title
The reworded question in your text
What I, based on your comments, believe you are actually asking
How to transfer my files to R Projects, and then to GitHub?
From RStudio, just create a new project and move your files to this folder. You can then initialize this folder with git using git init.
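If it helps, here is a hedged R-level sketch of the git part using the usethis package (an assumption on my part; the answer itself just uses git init from the shell), run from inside the newly created project:
library(usethis)
# Initialize the current project folder as a git repository
use_git()
# Optionally create a matching GitHub repository and push to it
# use_github()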
How would [my included code] need to be executed inside a project?
You don't need to change anything in your example code. If you just place your files in a project folder they will run just fine.
An R project mainly takes care of the following for you:
Working directory (it's always set to the project folder)
File paths (all paths are relative to the project root folder)
Settings (you can set project specific settings)
Further, many external packages are meant to work with projects, making many tasks easier for you. A project is also a very good starting point for sharing your code with Git.
What would be a good workflow for working with multiple scripts in an R project?
One common way of organizing multiple scripts is to make a new script calling the other scripts in order. Typically, I number the scripts so it's easy to see the order to call them. For example, here I would create 00_main.R and include the code:
source("01_data.R")
source("02_data.R")
source("03_graph.R")
Note that I've renamed your scripts to make the order clear.
In your code, you do not need to save the data to pass it between the scripts. The above code would run just fine if you delete the save() and load() parts of your code. The objects created by the scripts would still be in your global environment, ready for the next script to use them.
If you do need to save your data, I would save it to a folder named data/. The output from your plot I would probably save to outputs/ or plots/.
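For instance, a minimal sketch of that layout, assuming ggplot2 and that the data/ and plots/ folders live in the project root:
dir.create("data", showWarnings = FALSE)
dir.create("plots", showWarnings = FALSE)

df1 <- data.frame(x = seq(1, 100, 1), y = rnorm(100))
save(df1, file = "data/data1.Rda")   # data artifacts go in data/

library(ggplot2)
p <- ggplot(df1, aes(x = x, y = y)) + geom_point()
ggsave("plots/graph1.png", plot = p) # plot output goes in plots/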
When you get used to working with R, the next step to organize your code is probably to create a package instead of using only a project. You can find all the information you need in this book.

Read partitioned parquet directory (all files) in one R dataframe with apache arrow

How do I read a partitioned parquet file into R with arrow (without any spark)
The situation
I created parquet files with a Spark pipeline and saved them on S3
I want to read them with RStudio/RShiny, with one column as an index, to do further analysis
The parquet file structure
The parquet file created by my Spark job consists of several parts:
tree component_mapping.parquet/
component_mapping.parquet/
├── _SUCCESS
├── part-00000-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet
├── part-00001-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet
├── part-00002-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet
├── part-00003-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet
├── part-00004-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet
├── etc
How do I read this component_mapping.parquet into R?
What I tried
install.packages("arrow")
library(arrow)
my_df<-read_parquet("component_mapping.parquet")
but this fails with the error
IOError: Cannot open for reading: path 'component_mapping.parquet' is a directory
It works if I just read one file from the directory
install.packages("arrow")
library(arrow)
my_df<-read_parquet("component_mapping.parquet/part-00000-e30f9734-71b8-4367-99c4-65096143cc17-c000.snappy.parquet")
but I need to load all of them in order to query the data
What I found in the documentation
In the apache arrow documentation
https://arrow.apache.org/docs/r/reference/read_parquet.html and
https://arrow.apache.org/docs/r/reference/ParquetReaderProperties.html
I found that there are some properties for the read_parquet() command, but I can't get it working and cannot find any examples.
read_parquet(file, col_select = NULL, as_data_frame = TRUE, props = ParquetReaderProperties$create(), ...)
How do I set the properties correctly to read the full directory?
# should it be these methods?
$read_dictionary(column_index)
or
$set_read_dictionary(column_index, read_dict)
Help would be very appreciated
As #neal-richardson alluded to in his answer, more work has been done on this, and with the current arrow package (I'm running 4.0.0 currently) this is possible.
I noticed your files used snappy compression, which requires a special build flag before installation. (Installation documentation here: https://arrow.apache.org/docs/r/articles/install.html)
Sys.setenv("ARROW_WITH_SNAPPY" = "ON")
install.packages("arrow",force = TRUE)
The Dataset API implements the functionality you are looking for, with multi-file datasets. While the documentation does not yet include a wide variety of examples, it does provide a clear starting point. https://arrow.apache.org/docs/r/reference/Dataset.html
Below is a minimal example of reading a multi-file dataset from a given directory and converting it to an in-memory R data frame. The API also supports filtering criteria and selecting a subset of columns, though I'm still trying to figure out the syntax myself.
library(arrow)
## Define the dataset
DS <- arrow::open_dataset(sources = "/path/to/directory")
## Create a scanner
SO <- Scanner$create(DS)
## Load it as an Arrow Table in memory
AT <- SO$ToTable()
## Convert it to an R data frame
DF <- as.data.frame(AT)
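As a rough sketch of that filtering and column-selection syntax (not part of the original answer; the column names below are made up for illustration), the dplyr verbs can be applied to the dataset before collecting:
library(arrow)
library(dplyr)
DS <- arrow::open_dataset(sources = "/path/to/directory")
## Push column selection and row filtering down to the parquet files,
## then pull the result into an R data frame with collect()
DF <- DS %>%
  select(component_id, value) %>%   # hypothetical column names
  filter(value > 0) %>%
  collect()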
Solution for: Read partitioned parquet files from local file system into R dataframe with arrow
As I would like to avoid using any Spark or Python on the RShiny server, I can't use other libraries like sparklyr, SparkR, or reticulate with dplyr, as described e.g. in "How do I read a Parquet in R and convert it to an R DataFrame?".
I solved my task with your proposal, using arrow together with lapply and rbindlist:
my_df <- data.table::rbindlist(lapply(Sys.glob("component_mapping.parquet/part-*.parquet"), arrow::read_parquet))
Looking forward to when the Apache Arrow dataset functionality is available.
Thanks
Reading a directory of files is not something you can achieve by setting an option to the (single) file reader. If memory isn't a problem, today you can lapply/map over the directory listing and rbind/bind_rows into a single data.frame. There's probably a purrr function that does this cleanly. In that iteration over the files, you also can select/filter on each if you only need a known subset of the data.
In the Arrow project, we're actively developing a multi-file dataset API that will let you do what you're trying to do, as well as push down row and column selection to the individual files and much more. Stay tuned.
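A minimal sketch of that interim approach, assuming the purrr and dplyr packages are installed and that all part files share the same schema:
library(arrow)
library(purrr)
# List the individual part files and row-bind them into one data frame
files <- list.files("component_mapping.parquet", pattern = "^part-.*\\.parquet$", full.names = TRUE)
my_df <- map_dfr(files, read_parquet)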
Solution for: Read partitioned parquet files from S3 into R dataframe using arrow
As it took me a very long time to figure out a solution and I was not able to find anything on the web, I would like to share this solution for reading partitioned parquet files from S3:
library(arrow)
library(aws.s3)
bucket="mybucket"
prefix="my_prefix"
# using aws.s3 library to get all "part-" files (Key) for one parquet folder from a bucket for a given prefix pattern for a given component
files <- data.table::rbindlist(get_bucket(bucket = bucket, prefix = prefix))$Key
# apply the aws.s3::s3read_using function to each file using the arrow::read_parquet function to decode the parquet format
data <- lapply(files, function(x) {s3read_using(FUN = arrow::read_parquet, object = x, bucket = bucket)})
# concatenate all data together into one data.frame
data <- do.call(rbind, data)
What a mess but it works.
@neal-richardson is there a way to use arrow directly to read from S3? I couldn't find anything in the documentation for R.
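For what it's worth, newer arrow releases built with S3 support can open a partitioned dataset directly from an s3:// URI; a minimal sketch, reusing the bucket and prefix above as placeholders:
library(arrow)
# Requires an arrow build with S3 support enabled
DS <- open_dataset("s3://mybucket/my_prefix/component_mapping.parquet")
DF <- as.data.frame(Scanner$create(DS)$ToTable())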
I am working on this package to make this easier. https://github.com/mkparkin/Rinvent
Right now it can read from local storage, AWS S3, or Azure Blob, and it handles parquet files or delta files.
# read parquet from local with where condition in the partition
readparquetR(pathtoread="C:/users/...", add_part_names=F, sample=F, where="sku=1 & store=1", partition="2022")
#read local delta files
readparquetR(pathtoread="C:/users/...", format="delta")
your_connection = AzureStor::storage_container(AzureStor::storage_endpoint(your_link, key=your_key), "your_container")
readparquetR(pathtoread="blobpath/subdirectory/", filelocation = "azure", format="delta", containerconnection = your_connection)

Saving H2o data frame

I am working with a 10GB training data frame. I use the H2O library for faster computation. Each time I load the dataset, I have to convert the data frame into an H2O object, which takes a lot of time. Is there a way to store the converted H2O object? (So that I can skip the as.h2o(trainingset) step each time I run trials on building models.)
After the first transformation with as.h2o(trainingset), you can export / save the file to disk and later import it again.
my_h2o_training_file <- as.h2o(trainingset)
path <- "whatever/my/path/is"
h2o.exportFile(my_h2o_training_file, path = path)
And when you want to load it use either h2o.importFile or h2o.importFolder. See the function help for correct usage.
Or save the file as csv / txt before you transform it with as.h2o and load it directly into h2o with one of the above functions.
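A minimal sketch of re-importing the exported frame later, assuming the same path as above:
library(h2o)
h2o.init()
# Re-import the frame that was exported with h2o.exportFile()
my_h2o_training_file <- h2o.importFile(path = path)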
as.h2o(d) works like this (even when client and server are the same machine):
In R, it exports d to a csv file in a temp location.
It calls h2o.uploadFile(), which does an HTTP POST to the server, then a single-threaded import.
It returns the handle from that import.
It deletes the temp csv file it made.
Instead, prepare your data in advance somewhere(*), then use h2o.importFile() (See http://docs.h2o.ai/h2o/latest-stable/h2o-r/docs/reference/h2o.importFile.html). This saves messing around with the local file, and it can also do a parallelized read and import.
*: For speediest results, the "somewhere" should be as close to the server as possible. For it to work at all, the "somewhere" has to be somewhere the server can see. If client and server are the same machine, then that is automatic. At the other extreme, if your server is a cluster of machines in an AWS data centre on another continent, then putting the data into S3 works well. You can also put it on HDFS, or on a web server.
See http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-munging/importing-data.html for some examples in both R and Python.
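A hedged sketch of that approach, with placeholder paths (h2o.importFile() also accepts local paths, hdfs:// locations, and URLs):
library(h2o)
h2o.init()
# Import directly from a location the H2O server can see, e.g. S3
training <- h2o.importFile("s3://my-bucket/trainingset.csv")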

Export R data frame to MS Access

I am trying to export a data frame from R to MS Access but it seems to me that there is no package available to do this task. Is there a way to export a data frame directly to Access? Any help will be greatly appreciated.
The following works for medium-sized datasets, but may fail if MyRdataFrame is too large for Access's 2GB limit or if there are type-conversion errors.
library(RODBC)
db <- "C:Documents/PreviouslySavedBlank.accdb"
Mycon <- odbcConnectAccess2007(db)
sqlSave(Mycon, MyRdataFrame)
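As a hedged variation on the call above (the table name is just an example), sqlSave() can also take an explicit table name and skip the row names, and the connection should be closed when you are done:
sqlSave(Mycon, MyRdataFrame, tablename = "MyTable", rownames = FALSE)
odbcClose(Mycon)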
There is the ImportExport package.
The database already has to exist (at least in my case), so you have to create it first.
It has to be an Access 2000 database with the .mdb extension.
Here is an example:
ImportExport::access_export("existing_database.mdb", as.data.frame(your_R_data),
                            tablename = "bob")
with "bob" the name of the table you want to create in the database. Choose your own name of course and it has to be a non already existing table
It will also add a first column called rownames which is just an index column
Note that creating a .accdb file and then changing the extension to .mdb wont work ^^ you really have to open it and save it as .mdb. I added as.data.frame() but if your data is already one then no need.
There might be a way to handle .accdb files by using sqlSave directly (which is used internally by ImportExport) and specifying the driver from the RODBC package; see the link in the comment from @BenJacobson. But the solution above worked for me and it was only one line.

Where to store .xls file for xlsReadWrite in R

I am relatively new to R and am having some trouble with how to access my data. I have my test.xls file in My Documents. How do I access it from R?
library(xlsReadWrite)
DF1 <- read.xls("test.xls") # read 1st sheet
Set the working directory with:
setwd("C:/Documents and Settings/yourname/My Documents")
This link may be useful as a method of making a working folder per project and then placing all relevant info in that folder. It's a nice tutorial for making project folders that contain everything you need. This is one approach:
http://www.dangoldstein.com/flash/Rtutorial2/Rtutorial2.html
Using setwd() is another approach. I use a combination of the two in my work.
