I have a Postgres database. I want to find the minimum value of a column called calendarid, which is of type integer in yyyymmdd format, from a certain table. I am able to do so with the following code:
get_history_startdate <- function(src) {
  get_required_table(src) %>%  # this gives me the remote table, tbl(src, "table_name")
    select(calendarid) %>%
    collect() %>%              # copies every calendarid value into R memory
    min()                      # result: 20150131
}
But this method is really slow as it loads all the data from the database to the memory. Any ideas how can I improve it?
get_required_table(src) %>%
  summarise(min(calendarid, na.rm = TRUE)) %>%
  pull()
will run the appropriate SQL query.
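You can check the translation before running it; a minimal sketch, assuming get_required_table() returns the remote tbl as in the question:

library(dplyr)
library(dbplyr)

get_required_table(src) %>%
  summarise(start_date = min(calendarid, na.rm = TRUE)) %>%
  show_query()   # should print something like: SELECT MIN("calendarid") AS "start_date" FROM "table_name"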
If you just want the minimum value of the calendarid column across the entire table, then use this:
SELECT MIN(calendarid) AS min_calendarid
FROM your_table;
I don't know exactly what your R code is doing under the hood, but if it's bringing the entire table from Postgres into R, then it is very wasteful. If so, then running the above query directly on Postgres should give you a boost in performance.
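If you prefer to send that SQL from R rather than going through dbplyr, DBI can run it directly; a sketch, where con is an assumed DBI connection and your_table is a placeholder table name:

library(DBI)

result <- dbGetQuery(con, "SELECT MIN(calendarid) AS min_calendarid FROM your_table")
result$min_calendarid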
What is the dbplyr verb combination that is equivalent to DBI::dbSendQuery(con, "DELETE FROM <table> WHERE <condition>")?
What I want is not to query data from the database, but to remove data from and update a table in the database.
I want to do it in a dplyr way, but I am not sure if it is possible. I could not find anything similar in the package reference.
dbplyr translates dplyr commands to query database tables. I am not aware of any inbuilt way to modify existing database tables using pure dbplyr.
This is likely a design choice.
Within R we do not need to distinguish between fetching data from a table (querying) and modifying a table. This is probably because in R we can reload the original data into memory if an error/mistake occurs.
But in databases querying and modifying a table are deliberately different things. When modifying a database, you are modifying the source so additional controls are used (because recovering deleted data is a lot harder).
The DBI package is probably your best choice for modifying the database.
This is the approach I use for all my dbplyr work: often a custom function that takes the query produced by the dbplyr translation and inserts it into a DBI call (you can see examples of this in my dbplyr helpers GitHub repo).
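Not the actual helpers from that repo, but a minimal sketch of the pattern; the function name and the db_con connection are assumptions:

library(dplyr)
library(dbplyr)
library(DBI)

# Hypothetical helper: insert a dbplyr-rendered query into a larger SQL statement
# and execute it via DBI.
run_query_via_dbi <- function(remote_df, db_con, sql_template) {
  # sql_template contains "%s" where the rendered dbplyr query should go,
  # e.g. "SELECT * INTO output_table FROM (%s) AS subquery"
  query <- sprintf(sql_template, as.character(sql_render(remote_df)))
  DBI::dbExecute(db_con, query)
}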
Two approaches to consider for this: (1) an anti-join (on all columns) followed by writing a new table, (2) the DELETE FROM syntax.
Mock up of anti-join approach
records_to_remove = remote_table %>%
  filter(conditions)

desired_final_table = remote_table %>%
  anti_join(records_to_remove, by = colnames(remote_table))

query = paste0(
  "SELECT * INTO output_table FROM (",
  sql_render(desired_final_table),
  ") AS subquery"
)

DBI::dbExecute(db_con, as.character(query))
Mock up of DELETE FROM syntax
records_to_remove = remote_table %>%
  filter(conditions)

query = sql_render(records_to_remove) %>%
  as.character() %>%
  gsub(pattern = "SELECT *", replacement = "DELETE", x = ., fixed = TRUE)

DBI::dbExecute(db_con, query)
If you plan to run these queries multiple times, then wrapping them in a function with some validity checks is recommended.
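For example, a hypothetical wrapper around the DELETE FROM mock-up with a basic check (delete_rows and db_con are assumed names, not part of dbplyr):

library(dplyr)
library(dbplyr)
library(DBI)

delete_rows <- function(remote_df, db_con) {
  query <- sql_render(remote_df) %>%
    as.character() %>%
    gsub(pattern = "SELECT *", replacement = "DELETE", x = ., fixed = TRUE)

  # basic validity check: refuse to run an unconditional DELETE
  if (!grepl("WHERE", query, ignore.case = TRUE)) {
    stop("Refusing to execute a DELETE without a WHERE clause")
  }
  DBI::dbExecute(db_con, query)
}

# usage (names assumed): delete_rows(remote_table %>% filter(conditions), db_con)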
For some use cases deleting rows will not be necessary.
You could think of the filter command in R as deleting rows from a table. For example in R we might run:
prepared_table = input_table %>%
  filter(colX == 1) %>%
  select(colA, colB, colZ)
And think of this as deleting the rows where colX != 1 (keeping only the rows where colX == 1) before producing the output:
output = prepared_table %>%
  group_by(colA) %>%
  summarise(sumZ = sum(colZ))
(Or you could use an anti-join above instead of a filter.)
But for this type of deleting, you do not need to edit the source data, as you can just filter out the unwanted rows at runtime every time. Yes it will make your database query larger, but this is normal for working with databases.
So combining the preparation and output in SQL is normal (something like this):
SELECT colA, SUM(colZ) AS sumZ
FROM (
  SELECT colA, colB, colZ
  FROM input_table
  WHERE colX = 1
) AS prepared_table
GROUP BY colA
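With dbplyr, the same combined query can be generated from the pipeline above without touching the source table; a sketch, where db_con and the table name are assumptions:

library(dplyr)
library(dbplyr)

remote_input <- tbl(db_con, "input_table")

remote_input %>%
  filter(colX == 1) %>%
  select(colA, colB, colZ) %>%
  group_by(colA) %>%
  summarise(sumZ = sum(colZ, na.rm = TRUE)) %>%
  show_query()   # prints a nested SELECT ... WHERE ... GROUP BY query; nothing is deleted from the source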
So unless you need to modify the database, I would recommend filtering instead of deleting.
I have a .csv file that contains 105M rows and 30 columns that I would like to query for plotting in an R shiny app.
It contains alpha-numeric data that looks like:
# Example data
df <- data.frame(
  percent = as.numeric(rep(c("50", "80"), each = 5e2)),
  maskProportion = as.numeric(rep(c("50", "80"), each = 5e2)),
  dose = runif(1e3),
  origin = as.factor(rep(c("ABC", "DEF"), each = 5e2)),
  destination = as.factor(rep(c("XYZ", "GHI"), each = 5e2))
)
write.csv(df, "PassengerData.csv", row.names = FALSE)
In the terminal, I have ingested it into an SQLite database as follows:
$ sqlite3 -csv PassengerData.sqlite3 '.import PassengerData.csv df'
which is from:
Creating an SQLite DB in R from an CSV file: why is the DB file 0KB and contains no tables?
So far so good.
The problem I have is speed in querying in R so I tried indexing the DB back in the terminal.
In sqlite3, I tried creating indexes on percent, maskProportion, origin and destination following this link https://data.library.virginia.edu/creating-a-sqlite-database-for-use-with-r/ :
$ sqlite3 PassengerData.sqlite3
sqlite> create index "percent" on df("percent");
sqlite> create index "origin" on df("origin");
sqlite> create index "destination" on df("destination");
sqlite> create index "maskProportion" on df("maskProportion");
I run out of disk space because my DB seems to grow in size every time I make an index. E.g. after running the first command the size is 20GB. How can I avoid this?
I assume the concern is that running collect() to transfer data from SQL to R is too slow for your app. It is not clear how / whether you are processing the data in SQL before passing to R.
Several things to consider:
Indexes are not copied from SQL to R. SQL works with data off disk, so knowing where to look up specific parts of your data results in time savings. R works on data in memory, so indexes are not required.
collect transfers data from a remote table (in this case SQLite) into R memory. If your goal is simply to transfer data into R, you could read the csv directly into R instead of writing it to SQL and then reading from SQL into R (see the sketch after this list).
SQL is a better choice for doing data crunching / preparation of large datasets, and R is a better choice for detailed analysis and visualisation. But if both R and SQL are running on the same machine then both are limited by the CPU speed. This is not a concern if SQL and R are running on separate hardware.
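For example, on the second point, the SQLite step can be skipped entirely; a sketch assuming the data.table package, the file name from the question, and that the data (or at least the columns you need) fits in memory:

library(data.table)

# Read the csv straight into R, selecting only the columns the app actually plots
passenger_data <- fread("PassengerData.csv",
                        select = c("percent", "maskProportion", "dose", "origin", "destination"))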
Some things you can do to improve performance:
(1) Only read the data you need from SQL into R. Prepare the data in SQL first. For example, contrast the following:
# collect last
local_r_df = remote_sql_df %>%
  group_by(origin) %>%
  summarise(number = n()) %>%
  collect()

# collect first
local_r_df = remote_sql_df %>%
  collect() %>%
  group_by(origin) %>%
  summarise(number = n())
Both of these will produce the same output. However, in the first example, the summary takes place in SQL and only the final result is copied to R; while in the second example, the entire table is copied to R where it is then summarized. Collect last will likely have better performance than collect first because it transfers only a small amount of data between SQL and R.
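You can confirm where the work happens with show_query, which prints the SQL that the collect-last pipeline will send to the database:

library(dplyr)

remote_sql_df %>%
  group_by(origin) %>%
  summarise(number = n()) %>%
  show_query()   # the grouping and counting run in SQLite; only the per-origin counts come back to R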
(2) Preprocess the data for your app. If your app will only examine the data from a limited number of directions, then the data could be preprocessed / pre-summarized.
For example, suppose users can pick at most two dimensions and receive a cross-tab; then you could calculate all the two-way cross-tabs in advance and save them. This is likely to be much smaller than the entire database. Then at runtime, your app loads the prepared summaries and shows the user any summary they request. This will likely be much faster.
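A rough sketch of that pre-computation (the remote_sql_df name, column names, and file path are assumptions):

library(dplyr)

# Run once, outside the app: pre-compute a two-way cross-tab and save it locally
crosstab_origin_destination <- remote_sql_df %>%
  count(origin, destination) %>%
  collect()

saveRDS(crosstab_origin_destination, "crosstab_origin_destination.rds")

# In the shiny app, load the small pre-computed summary instead of querying the full table:
# crosstab <- readRDS("crosstab_origin_destination.rds")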
I've connected to a SQL Server database with the code shown here, and then I try to run a query to collect data filtered on a date, which is held as an integer in the table in YYYYMMDD format
con <- DBI::dbConnect(odbc::odbc(), driver = "SQL Server", server = "***")
fact_transaction_line <- tbl(con,in_schema('***', '***'))
data <- fact_transaction_line %>%
filter(key_date_trade == 20200618)
This stores as a query, but fails when I use glimpse to look at the data, with the below error
"dbplyr_031"
WHERE ("key_date_trade" = 20200618.0)'
Why isn't this working, and is there a better way for me to format the query to get this data?
Both fact_transaction_line and data in your example code are remote tables. One important consequence of this is that you are limited to interacting with them via certain dplyr commands, and glimpse may not be a command that is supported for remote tables.
What you can do instead (including @Bruno's suggestions):
Use head to view the top few rows of your remote data.
If you are receiving errors, try show_query(data) to see the underlying SQL query for the remote table. Check that this query is correct.
Check the size of the remote table with remote_table %>% ungroup() %>% summarise(num = n()). If the remote table is small enough to fit into your local R memory, then local_table = collect(remote_table) will copy the table into R memory.
Combine options 1 & 3: local_table = data %>% head(100) %>% collect() will load the first 100 rows of your remote table into R. Then you can glimpse(local_table).
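Putting those options together, a quick diagnostic session might look like this (a sketch using the names from the question; the 20200618L integer literal avoids the 20200618.0 seen in the translated SQL, though that may not be the cause of the error):

library(dplyr)
library(dbplyr)

data <- fact_transaction_line %>%
  filter(key_date_trade == 20200618L)   # integer literal, so the SQL compares against 20200618

show_query(data)      # inspect the SQL that will be sent to SQL Server

local_table <- data %>%
  head(100) %>%
  collect()           # copy only the first 100 rows into R memory

glimpse(local_table)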
I am using Sparklyr for a project and have understood that persisting is very useful. I am using sdf_persist for this, with the following syntax (correct me if I am wrong):
data_frame <- sdf_persist(data_frame)
Now I am reaching a point where I have too many RDDs stored in memory, so I need to unpersist some. However I cannot seem to find the function to do this in Sparklyr. Note that I have tried:
dplyr::db_drop_table(sc, "data_frame")
dplyr::db_drop_table(sc, data_frame)
unpersist(data_frame)
sdf_unpersist(data_frame)
But none of those work.
Also, I am trying to avoid using tbl_cache (in which case it seems that db_drop_table works) as sdf_persist seems to offer more control over the storage level. It might be that I am missing the big picture of how to use persistence here, in which case, I'll be happy to learn more.
If you don't care about granularity then the simplest solution is to invoke Catalog.clearCache:
spark_session(sc) %>% invoke("catalog") %>% invoke("clearCache")
Uncaching a specific object is much less straightforward due to sparklyr's indirection. If you check the object returned by sdf_persist you'll see that the persisted table is not exposed directly:
df <- copy_to(sc, iris, memory = FALSE, overwrite = TRUE) %>% sdf_persist()

spark_dataframe(df) %>%
  invoke("storageLevel") %>%
  invoke("equals", invoke_static(sc, "org.apache.spark.storage.StorageLevel", "NONE"))
[1] TRUE
That's because you don't get the registered table directly, but rather the result of a subquery like SELECT * FROM ....
It means you cannot simply call unpersist:
spark_dataframe(df) %>% invoke("unpersist")
as you would in one of the official APIs.
Instead, you can try to retrieve the name of the source table, for example like this:
src_name <- as.character(df$ops$x)
and then invoke Catalog.uncacheTable:
spark_session(sc) %>% invoke("catalog") %>% invoke("uncacheTable", src_name)
That is likely not the most robust solution, so please use with caution.
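If you need to do this repeatedly, the two steps can be wrapped in a small helper (a sketch; the helper name is an assumption, and it relies on df$ops$x holding the source table name, as above):

library(sparklyr)

# Hypothetical helper: uncache the registered table behind a persisted sparklyr tbl
uncache_tbl <- function(sc, df) {
  src_name <- as.character(df$ops$x)
  spark_session(sc) %>%
    invoke("catalog") %>%
    invoke("uncacheTable", src_name)
  invisible(df)
}

# usage: uncache_tbl(sc, df)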
I was looking for a way to make spark_write_csv upload only a single file to S3, because I want to save the regression result on S3. I was wondering if options has some parameter which defines the number of partitions. I could not find it anywhere in the documentation. Or is there any other efficient way to upload the resulting table to S3?
Any help is appreciated!
The options argument is equivalent to the options call on the DataFrameWriter (you can check the DataFrameWriter.csv documentation for a full list of options specific to the CSV source), and it cannot be used to control the number of output partitions.
While in general it is not recommended, you can use the Spark API to coalesce the data and convert it back to a sparklyr tbl:
df %>%
  spark_dataframe() %>%
  invoke("coalesce", 1L) %>%
  invoke("createOrReplaceTempView", "_coalesced")

tbl(sc, "_coalesced") %>% spark_write_csv(...)
or, in recent versions, sparklyr::sdf_coalesce:
df %>% sparklyr::sdf_coalesce(partitions = 1)
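For example (a sketch; the s3a:// path is a placeholder and assumes the cluster already has S3 credentials configured):

library(sparklyr)

df %>%
  sparklyr::sdf_coalesce(partitions = 1) %>%   # collapse to a single partition
  spark_write_csv("s3a://your-bucket/regression-result", mode = "overwrite")
# Note: Spark still writes a directory containing a single part-* file, not one named csv file.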