FullCalendar Scheduler - Displaying multiple agendaDays for multiple items in "resources" object? - fullcalendar

I have a requirement where I wish to split a vertical resource view even further, vertically.
For example, please consider the example given here.
In this example, there are 4 columns, viz. Rooms "A" to "D", for a single day.
What I wish to have is 4 sets of "single day rows", one below the other, with a header of the Room title for each set.
For example:
[..................Header Info ................]
[. (including Day Title and filters). ]
[..............................................]
[................... Room A ...................]
[12 AM: .......................................]
[11 PM: .......................................]
[..............................................]
[................... Room B...................]
[12 AM: .......................................]
[11 PM: .......................................]
[..............................................]
and so on..
I tried looking for any online documentation covering this, but was unable to find anything.

Related

Applying a System Call for ImageJ over a List in R

I am working with a large number of image files within several subdirectories of one parent folder.
I am attempting to run an ImageJ macro to batch-process the images (specifically, I am trying to stitch together a series of images taken on the microscope into single images). Unfortunately, I don't think I can run this as a plain ImageJ macro because the images were taken with varying grid sizes, i.e. some are 2x3, some are 3x3, some are 3x2, etc.
I've written an R script that is able to evaluate the image folders and determine the grid size, now I am trying to feed that information to my ImageJ macro to batch process the folder.
The issue I am running into seems like it should be easy to solve, but I haven't had any luck figuring it out: in R, I have a data.frame that I need to pass to the system command line by line, with the columns concatenated into a single character string delimited by *'s.
Here's an example from the data.frame I have in R:
X xcoord ycoord input
1 4_10249_XY01_Fused_CH2 2 3 /XY01
2 4_10249_XY02_Fused_CH2 2 2 /XY02
3 4_10249_XY03_Fused_CH2 3 3 /XY03
4 4_10249_XY04_Fused_CH2 2 2 /XY04
5 4_10249_XY05_Fused_CH2 2 2 /XY05
6 4_10249_XY06_Fused_CH2 2 3 /XY06
Here's what each row needs to be transformed into so that ImageJ can understand it:
4_10249_XY01_Fused_CH2*2*3*/XY01
4_10249_XY02_Fused_CH2*2*2*/XY02
4_10249_XY03_Fused_CH2*3*3*/XY03
4_10249_XY04_Fused_CH2*2*2*/XY04
4_10249_XY05_Fused_CH2*2*2*/XY05
4_10249_XY06_Fused_CH2*2*3*/XY06
I tried achieving this with a for loop inside a function that I thought would pass each row to the system command, but the macro only runs for the first line and none of the others.
macro <- function(i) {
  for (row in 1:nrow(i)) {
    df <- paste(i$X, i$xcoord, i$ycoord, i$input, sep = '*')
  }
  system2('/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx',
          args = c('-batch "/Users/All Stitched CH2.ijm"', df))
}
macro(table)
I think this is because the for loop is not maintaining the list-form of the data.frame. How do I concatenate the table by row and maintain the list-structure? I don't know if I'm asking the right question, but hopefully I'm close enough that someone here understands what I'm trying to do.
I appreciate any help or tips you can provide!
Turns out taking a break helps a lot!
I came back to this after lunch and came up with an easy solution (duh!). I thought I would post it in case anyone comes along later with a similar issue.
I used stringr to combine my data table by columns, then put the result back into list form using as.list. Finally, to feed the list into my macro, I edited the macro to contain only the system command and then used lapply to apply the macro to my list of inputs. Here is what my code looks like in the end:
library(stringr)
tablecombined <- str_c(table$X, table$xcoord, table$ycoord, table$input, sep = "*")
listylist <- as.list(tablecombined)
macro <- function(i) {
  system2('/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx',
          args = c('-batch "/Users/All Stitched CH2.ijm"', i))
}
runme <- lapply(listylist, macro)
Note: I am using the system2 command because it can take arguments, which is necessary for me to be able to feed it a series of images to iterate over. I started with the solution posted here: How can I call/execute an imageJ macro with R?
but needed additional flexibility for my specific situation. Hopefully someone may find this useful in the future when running ImageJ Macros from R!
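As an aside, the original loop produced only one macro run because paste() is vectorized over the whole data.frame and system2() sat outside the loop, so it was called just once. A minimal alternative sketch, assuming the same data.frame table, Fiji path, and .ijm macro path as above (the function name run_macro_per_row is hypothetical), is to call system2() once per row:
run_macro_per_row <- function(df) {
  for (row in seq_len(nrow(df))) {
    # Build the *-delimited argument for this row only
    arg <- paste(df$X[row], df$xcoord[row], df$ycoord[row], df$input[row], sep = "*")
    # Invoke the ImageJ macro once per row
    system2('/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx',
            args = c('-batch "/Users/All Stitched CH2.ijm"', arg))
  }
}
run_macro_per_row(table)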

How to combine many records value into one record

As you can see from the picture below, I was able to combine two deals (outlined in red), but the output should have one result instead of two. If anyone has any solutions to this, please advise.
The component outlined in red returns more than one record; each record has an amount, and the sum of all the record amounts must be shown in a single row.
record1: Amount:100
record2: Amount:200
record3: Amount:500
Merging all of the records gives the following:
record: Amount:800
Is it possible to merge many rows into a single row in integromat?
Based on your screenshot, you are aggregating the wrong module. The source module in your aggregator has to be set to a module that generates multiple bundles; in your case, that is module 10.
You are aggregating module 14, which generates a single output bundle for every input bundle, so there is nothing to aggregate. Module 10 returns 2 bundles for a single input.
Your case:
/---[6]---([14]---[11 aggregator])---
---[10] multiple output bundles
\---[6]---([14]---[11 aggregator])---
Solution:
/---[6]---[14]---\
---([10] [11 aggregator])--- single output bundle
\---[6]---[14]---/
Your scenario has to look like this (Aggregator: Source module = module no.10):

CreateML Recommender Training Error: Item IDs in the recommender model must be numbered 0, 1, ..., num_items - 1

I'm using CreateML to generate a Recommender model using an implicit dataset of the format: User ID, Item ID. The data is loaded into CreateML as a CSV with about 400k rows.
When attempting to 'Train' the model, I receive the following error:
Training Error: Item IDs in the recommender model must be numbered 0, 1, ..., num_items - 1
My dataset is in the following format:
"user_id","item_id"
"e7ca1b039bca4f81a33b21acc202df24","f7267c60-6185-11ea-b8dd-0657986dc989"
"1cd4285b19424a94b33ad6637ec1abb2","e643af62-6185-11ea-9d27-0657986dc989"
"1cd4285b19424a94b33ad6637ec1abb2","f2fd13ce-6185-11ea-b210-0657986dc989"
"1cd4285b19424a94b33ad6637ec1abb2","e95864ae-6185-11ea-a254-0657986dc989"
"31042cbfd30c42feb693569c7a2d3f0a","e513a2dc-6185-11ea-9b4c-0657986dc989"
"39e95dbb21854534958d53a0df33cbf2","f27f62c6-6185-11ea-b14c-0657986dc989"
"5c26ca2918264a6bbcffc37de5079f6f","ec080d6c-6185-11ea-a6ca-0657986dc989"
I've tried modifying both Item ID and User ID to enumerated IDs, but I still receive the training error. Example:
"item_ids","user_ids"
0,0
1,0
2,0
2,0
0,225
400,225
409,225
0,282
0,4
8,4
8,4
I receive this error both within the CreateML UI and when using CreateML within a Swift playground. I've also tried removing duplicates and verified that the maximum ID for each column is (num_items - 1).
I've searched for documentation on what the exact requirement is for the set of IDs with no luck.
Thank you in advance for any help clarifying this error message.
I was able to discuss this issue with Apple's CoreML developers during WWDC2020. They described this as a known bug which will be fixed with the upcoming OS (Big Sur). The work-around for this bug is:
In the CSV dataset, create records for a single user which interacts with ALL items, and create records for a single item interacted with by ALL users.
Using pandas in Python, I essentially implemented the following:
import csv
import pandas as pd
# ratings_df is assumed to already hold the (user_id, item_id) pairs from the dataset
# Find the unique item ids
item_ids = ratings_df.item_id.unique()
# Find the unique user ids
user_ids = ratings_df.user_id.unique()
# Create a 'dummy user' which interacts with all items
mock_item_interactions_df = pd.DataFrame({'item_id': item_ids, 'user_id': 'mock-user'})
ratings_with_mocks_df = ratings_df.append(mock_item_interactions_df)
# Create a 'dummy item' which interacts with all users
mock_item_interactions_df = pd.DataFrame({'item_id': 'mock-item', 'user_id': user_ids})
ratings_with_mocks_df = ratings_with_mocks_df.append(mock_item_interactions_df)
# Export the CSV
ratings_with_mocks_df.to_csv('data/ratings-w-mocks.csv', quoting=csv.QUOTE_NONNUMERIC, index=True)
Using this CSV, I successfully generated a CoreML model using CreateML.
Try adding an unnamed first column to your CSV data that counts rows from 0 to (number of items - 1), like:
"","userID","itemID","rating"
0,"a","x",1
1,"a","y",0
...
I think that is what made it start working for me today. I use UUIDs for userID and itemID in my training model. Also be sure to sort the rows by itemID so that all rows for a given itemID are next to each other.

Matching columns of two datasets

I've been assigned this problem where I need to match mobile apps and publishers between two data sets (One is GooglePlay, the other is iTunes).
Here is a description of the variables used in the iTunes dataset (the Google Play dataset's variable names are similar or the same).
anon_ios_app_id: anonymized iOS app id
anon_ios_publisher_id: anonymized iOS publisher id
points: the “worth” of the match, 10 points is highest worth and 0.5 is the lowest.
ios_name: name of the mobile app in the itunes store
ios_publisher_name: name of the publisher of the app in the itunes store
category_name: the category of the app
type: Game or Non-game
I've done some analysis to look for the names of apps in the data sets that share the same name and publishers. As an example, I searched for apps that had "Walmart" in their names.
GooglePlay <- read.csv("...\\GooglePlay.csv", header = TRUE)
iTunes <- read.csv("...\\iTunes.csv", header = TRUE)
grep("Walmart", iTunes$ios_name)
[1] 41203 51026 63522 64330 112441 113516 115510 117588 117788 119558 119605 120002 165514 248817
[15] 277425 290010 463244 546799 565806
grep("Walmart", GooglePlay$gp_name)
[1] 154 31984 162284 162342 162792 168722 168774 169339 325520 325601 357122 360050 436084 437144
[15] 441458 447177 503260
During my analysis, I did find that some apps had the same name and publisher in both data sets. For example
GooglePlay$gp_name[154]
[1] Walmart Photo
GooglePlay$gp_publisher_name[154]
[1] Kodak Alaris Inc.
iTunes$ios_name[165514]
[1] Walmart Photo
iTunes$ios_publisher_name[165514]
[1] Kodak Alaris Inc.
My objectives are:
1. Provide one unified file with all the respective IDs/names of the matched apps/publishers.
2. Provide one number: the SUM(iOS points + GP points) for the matched apps.
What functions should I use to match apps and publishers from the two data sets? How do I make a unified file of those matches?
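A minimal sketch of one common approach, assuming the column names shown above (gp_name and gp_publisher_name in GooglePlay; ios_name, ios_publisher_name, and points in iTunes) and a hypothetical gp_points column holding the Google Play points, is an exact inner join with merge():
# Sketch: inner-join the two data sets on app name and publisher name.
# gp_points is a hypothetical column name for the Google Play points.
matched <- merge(
  GooglePlay, iTunes,
  by.x = c("gp_name", "gp_publisher_name"),
  by.y = c("ios_name", "ios_publisher_name")
)
# Objective 1: one unified file with the matched IDs/names
write.csv(matched, "matched_apps.csv", row.names = FALSE)
# Objective 2: one number, SUM(iOS points + GP points) for the matched apps
sum(matched$points + matched$gp_points)
Exact matching will miss apps whose titles differ slightly between the two stores, so in practice a fuzzy-matching step (for example agrepl or the stringdist package) is often layered on top of the exact join.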

How can I use R (Rcurl/XML packages ?!) to scrape this webpage?

I have a (somewhat complex) web scraping challenge that I wish to accomplish and would love some direction (to whatever level you feel like sharing). Here goes:
I would like to go through all the "species pages" present in this link:
http://gtrnadb.ucsc.edu/
So for each of them I will go to:
The species page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/)
And then to the "Secondary Structures" page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/Aero_pern-structs.html)
Inside that link I wish to scrape the data in the page so that I will have a long list containing this data (for example):
chr.trna3 (1-77) Length: 77 bp
Type: Ala Anticodon: CGC at 35-37 (35-37) Score: 93.45
Seq: GGGCCGGTAGCTCAGCCtGGAAGAGCGCCGCCCTCGCACGGCGGAGGcCCCGGGTTCAAATCCCGGCCGGTCCACCA
Str: >>>>>>>..>>>>.........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<....
Where each line will have its own list (inside the list for each "trna", inside the list for each animal).
I remember coming across the packages Rcurl and XML (in R) that can allow for such a task. But I don't know how to use them. So what I would love to have is:
1. Some suggestion on how to build such a code.
2. And recommendation for how to learn the knowledge needed for performing such a task.
Thanks for any help,
Tal
Tal,
You could use R and the XML package to do this, but (damn) that is some poorly formed HTML you are trying to parse. In fact, in most cases you would want to be using the readHTMLTable() function, which is covered in this previous thread.
Given this ugly HTML, however, we will have to use the RCurl package to pull the raw HTML and create some custom functions to parse it. This problem has two components:
Get all of the genome URLs from the base webpage (http://gtrnadb.ucsc.edu/) using the getURLContent() function in the RCurl package and some regex magic :-)
Then take that list of URLs, scrape the data you are looking for, and stick it into a data.frame.
So, here goes...
library(RCurl)
### 1) First task is to get all of the web links we will need ##
base_url<-"http://gtrnadb.ucsc.edu/"
base_html<-getURLContent(base_url)[[1]]
links<-strsplit(base_html,"a href=")[[1]]
get_data_url <- function(s) {
  u_split1 <- strsplit(s, "/")[[1]][1]
  u_split2 <- strsplit(u_split1, '\\"')[[1]][2]
  ifelse(grep("[[:upper:]]", u_split2) == 1 & length(strsplit(u_split2, "#")[[1]]) < 2,
         return(u_split2), return(NA))
}
# Extract only those elements that are relevant
genomes<-unlist(lapply(links,get_data_url))
genomes<-genomes[which(is.na(genomes)==FALSE)]
### 2) Now, scrape the genome data from all of those URLS ###
# This requires two complementary functions that are designed specifically
# for the UCSC website. The first parses the data from a -structs.html page
# and the second collects that data in to a multi-dimensional list
parse_genomes <- function(g) {
  g_split1 <- strsplit(g, "\n")[[1]]
  g_split1 <- g_split1[2:5]
  # Pull all of the data and stick it in a list
  g_split2 <- strsplit(g_split1[1], "\t")[[1]]
  ID <- g_split2[1]                           # Sequence ID
  LEN <- strsplit(g_split2[2], ": ")[[1]][2]  # Length
  g_split3 <- strsplit(g_split1[2], "\t")[[1]]
  TYPE <- strsplit(g_split3[1], ": ")[[1]][2] # Type
  AC <- strsplit(g_split3[2], ": ")[[1]][2]   # Anticodon
  SEQ <- strsplit(g_split1[3], ": ")[[1]][2]  # Sequence
  STR <- strsplit(g_split1[4], ": ")[[1]][2]  # Structure string
  return(c(ID, LEN, TYPE, AC, SEQ, STR))
}
# This will be a high dimensional list with all of the data, you can then manipulate as you like
get_structs <- function(u) {
  struct_url <- paste(base_url, u, "/", u, "-structs.html", sep = "")
  raw_data <- getURLContent(struct_url)
  s_split1 <- strsplit(raw_data, "<PRE>")[[1]]
  all_data <- s_split1[seq(3, length(s_split1))]
  data_list <- lapply(all_data, parse_genomes)
  for (d in 1:length(data_list)) {
    data_list[[d]] <- append(data_list[[d]], u)
  }
  return(data_list)
}
# Collect data, manipulate, and create data frame (with slight cleaning)
genomes_list<-lapply(genomes[1:2],get_structs) # Limit to the first two genomes (Bdist & Spurp), a full scrape will take a LONG time
genomes_rows<-unlist(genomes_list,recursive=FALSE) # The recursive=FALSE saves a lot of work, now we can just do a straightforward manipulation
genome_data<-t(sapply(genomes_rows,rbind))
colnames(genome_data)<-c("ID","LEN","TYPE","AC","SEQ","STR","NAME")
genome_data<-as.data.frame(genome_data)
genome_data<-subset(genome_data,ID!="</PRE>") # Some malformed web pages produce bad rows, but we can remove them
head(genome_data)
The resulting data frame contains seven columns for each genome entry: ID, length, type, anticodon, sequence, structure string, and name. The name column contains the base genome, which was my best guess for data organization. Here is what it looks like:
head(genome_data)
ID LEN TYPE AC SEQ
1 Scaffold17302.trna1 (1426-1498) 73 bp Ala AGC at 34-36 (1459-1461) AGGGAGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACCGGGATCGATGCCCGGGTTTTCCA
2 Scaffold20851.trna5 (43038-43110) 73 bp Ala AGC at 34-36 (43071-43073) AGGGAGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACCGGGATCGATGCCCGGGTTCTCCA
3 Scaffold20851.trna8 (45975-46047) 73 bp Ala AGC at 34-36 (46008-46010) TGGGAGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACCGGGATCGATGCCCGGGTTCTCCA
4 Scaffold17302.trna2 (2514-2586) 73 bp Ala AGC at 34-36 (2547-2549) GGGGAGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACAGGGATCGATGCCCGGGTTCTCCA
5 Scaffold51754.trna5 (253637-253565) 73 bp Ala AGC at 34-36 (253604-253602) CGGGGGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACCGGGATCGATGCCCGGGTCCTCCA
6 Scaffold17302.trna4 (6027-6099) 73 bp Ala AGC at 34-36 (6060-6062) GGGGAGCTAGCTCAGATGGTAGAGCGCTCGCTTAGCATGCGAGAGGtACCGGGATCGATGCCCGAGTTCTCCA
STR NAME
1 .>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<.. Spurp
2 .>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<.. Spurp
3 .>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<.. Spurp
4 >>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>.>>>.......<<<.<<<<<<<<. Spurp
5 .>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<.. Spurp
6 >>>>>>>..>>>>........<<<<.>>>>>.......<<<<<......>>>>.......<<<<.<<<<<<<. Spurp
I hope this helps, and thanks for the fun little Sunday afternoon R challenge!
Just tried it using Mozenda (http://www.mozenda.com). After roughly 10 minutes I had an agent that could scrape the data as you describe. You may be able to get all of this data using just their free trial. Coding is fun, if you have time, but it looks like you may already have a solution coded for you. Nice job, Drew.
Interesting problem, and I agree that R is cool, but somehow I find R a bit cumbersome in this respect. I prefer to get the data into an intermediate plain-text form first so that I can verify that the data is correct at every step. If the data is already in its final form, or for uploading your data somewhere, RCurl is very useful.
The simplest approach, in my opinion (on Linux/Unix/Mac, or in Cygwin), would be to just mirror the entire http://gtrnadb.ucsc.edu/ site using wget, take the files named *-structs.html, sed or awk out the data you want, and format it for reading into R.
I'm sure there would be lots of other ways also.
