Importing quizzes into Canvas with supplement file - r-exams

I don't seem to be able to correctly import an exam generated with this lm template.
The exams2canvas("lm.Rmd", n = 10) function will generate the following error:
Error in switch(type, num = "numerical_question", schoice = "multiple_choice_question", :
EXPR must be a length 1 vector
I can export it with exams2qti21(), but then Canvas does not offer the supplemental file (and no import error is generated). This is the HTML of the question:
<p> </p>
<p>Using the data provided in <a>regression.csv</a> estimate a linear regression of <code>y</code> on <code>x</code> and answer the following questions.</p>
<p><br /><br />b. Estimated slope with respect to <code>x</code>:<br /><br /><br /></p>
As you can see, there is no href...

TL;DR
This is not about the supplementary files but about the question type. Using supplementary files with a single question (num, schoice, mchoice) works correctly in exams2canvas().
Background
The error in exams2canvas() is caused by the "cloze" question with schoice and num elements. Canvas supports only cloze questions with schoice elements placed in the text (displayed as multiple dropdowns). Hence, the question cannot be prepared for Canvas correctly.
Clearly, the error message was not helpful in pointing to this problem. In the devel version of the package I have now added an explicit error message for this.
The problems with the exams2qti21() output are probably due to insufficient customizations for Canvas. Internally, exams2canvas() interfaces exams2qti12() (QTI 1.2 not 2.1) but with several tweaks so that the resulting output can be imported correctly into Canvas. For the output from exams2qti21() we have not been able to do similar tweaks to make it work correctly.
The problem with the missing supplements is most likely caused by the default base64 = TRUE in exams2qti21(). Base64 encoding is not supported in Canvas and such content is simply stripped upon import, hence the missing href.

Related

Have you ever had this error? "numbers of left and right parentheses in Newick string not equal" - tree in R

The above error occurs when trying to read a tree:
tree <- read.tree(paste0(table_dir,'tree.19.08.tre'))
I tried to reproduce the error with other trees; for example, with the iris example dataset it worked perfectly. The tree was generated with write.tree(tree, file='C:/Users/J/Desktop/proj/d/t/tree.19.08.tre', append = FALSE). I can view the tree in the program FigTree (Java based). Maybe it looks a bit strange, but why can't I open it in R?
Any suggestions?
This is the full error with traceback:
Error in FUN(X[[i]], ...) : numbers of left and right parentheses in Newick string not equal
3. FUN(X[[i]], ...)
2. lapply(STRING, .treeBuild)
1. read.tree(paste0(table_dir, "tree.19.08.tre"))
I had the same problem and couldn't load it with DECIPHER either, but I managed it with phytools:
library(phytools)
tree = read.newick(treeFileName)
Exploring the loaded phylo object, I realized that the tree was not correct. The reason was that the leaves of the tree contained semicolons in their names, as is customary in some taxonomic formats, e.g.
d__Bacteria;p__Proteobacteria;c__Gammaproteobacteria
and the problem is that the semicolon indicates the end of the tree in Newick format. Simply transforming the semicolons into another symbol, e.g.
d__Bacteria|p__Proteobacteria|c__Gammaproteobacteria
solved the problem.
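The clean-up described above can be scripted. Here is a minimal Python sketch (the function name and example string are purely illustrative): it keeps the terminal semicolon, which Newick requires, and replaces all embedded ones. Note it replaces every other semicolon blindly, so check your file if labels could contain semicolons for other reasons:

```python
def fix_newick(newick):
    """Replace semicolons embedded in taxon labels with '|',
    keeping only the final semicolon that terminates the Newick tree."""
    s = newick.rstrip()
    if not s.endswith(";"):
        raise ValueError("not a terminated Newick string")
    # strip the terminator, neutralise embedded ';', then re-append it
    return s[:-1].replace(";", "|") + ";"

print(fix_newick("(d__Bacteria;p__Proteobacteria:0.1,other:0.2);"))
# → (d__Bacteria|p__Proteobacteria:0.1,other:0.2);
```

After rewriting the file this way, read.tree() should no longer see spurious tree terminators.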
I've got exactly the same error when loading a tree I got from a colleague.
But I was able to load it using
tree <- DECIPHER::ReadDendrogram(treeFileName)

Conditionally split dataframe in rows according to value in cell (in R!)

I have a data frame as follows. I'm putting a single row here because it is enough for the example.
df <- structure(list(issue_url = "https://api.github.com/repos/ropensci/software-review/issues/357",
user = "kauedesousa", body = "Dear #cvitolo thank you so much for your comments and suggestions. It helped a lot! We have worked in incorporating the comments to *chirps* v0.0.6 https://github.com/agrobioinfoservices/chirps. Here we have included a point-to-point response to your comments:\r\n\r\n# general comments\r\n\r\n>README, it seems the code in your README file is only visualised but not executed. In this case, you could keep the README.md and remove README.Rmd (as this is basically redundant). \r\n\r\nThe file README.Rmd was removed\r\n\r\n> Your R folder contains a file called sysdata.rda. Is there a reason to keep these data there? Usually data should be placed under the root folder in a subfolder called data. I would suggest to move sysdata under the chirps/data folder, document the datasets (documentation is currently missing for both tapajos and tapajos_geom) and load the data using data(\"sysdata\") or chirps::tapajos. If you make this change, please change the call to chirps:::tapajo in get_esi() accordingly. \r\n\r\nThe sf polygon is exported as 'tapajos', the sf POINT object is not necessary and can be generated in the examples with sf. Also, functions dataframe_to_geojson and sf_to_geojson are exported since they had the same issue using ::: in the examples and could be useful for the users\r\n\r\n> the man folder contains a figure folder. Is this good practice? I thought the man folder should be used only for documentation. Would it make sense to move the figure folder under inst, for instance? \r\n\r\nThe /man structure is part of using the pre-built vignettes. We not aware of any issues with it. CRAN hasn't baulked at it and Adam (one of the co-authors) just submitted two package updates in the last month that use this method\r\n\r\n#\ttests folder:\r\n\r\n> each of your test files contain a call to library(chirps). This is superflous because you load the package in testthat.R and that should suffice. 
My suggestion is to load all the packages needed by the tests in testthat.R and remove the commands library(package_name) in the individual test files.\r\n\r\nDONE\r\n\r\n> When you call library() sometimes you use library(package_name), other times library(\"package_name\"). I would suggest to consistently use the latter. \r\n\r\nDONE\r\n\r\n> test-get_chirps.R: you only test that you get the right object class but do not test the returned values, is there a reason for that? In my experience, it is very important to test the correctness of the actual data and I would suggest you to develop tests for that. \r\n\r\nWe have updated the tests so it checks if the functions return the correct values. For this we downloaded the data from ClimateSERV and compared it with the ones retrieved by get_chirps, get_esi and precip_indices to validate it. The tests still have a skip_on_cran option as a CRAN policy. But we have opened an issue in the package repo and will keep it there until we figure out how to make 'vcr' works with 'chirps' https://github.com/agrobioinfoservices/chirps/issues/7\r\n\r\n>test-get_esi.R: as above, you only test that you get the right object class but do not test the returned values. I would suggest to also test the data values. \r\n\r\nSame as above\r\n\r\n> test-precip_indicesl.R: the file name seems to contain an extra \"l\", should that not be test-precip_indices.R? you only test the dimensions of the output data frame but not the values themselves. As, above, I would suggest to also test the data values. \r\n\r\nDONE\r\n\r\n# vignettes:\r\n>The file Overview.Rmd.orig seems redundant and can be removed. 
\r\n\r\nWe use this file to speed up the vignette creation, here Jeroen Ooms shows how it works https://ropensci.org/technotes/2019/12/08/precompute-vignettes/\r\n\r\n>It's not necessary to run twice the command precip_indices(dat, timeseries = TRUE, intervals = 15), the second one can be removed.\r\n\r\nDONE\r\n\r\n>When running the command get_chirps, I get the following warning message: In st_buffer.sfc(st_geometry(x), dist, nQuadSegs, endCapStyle = endCapStyle,: st_buffer does not correctly buffer longitude/latitude data. Can this warning be eliminated? Maybe adding a note in the documentation? \r\n\r\nThese warning messages comes from 'sf', but we don't know if is a good practice to suppress that. We added a note to the documentation\r\n\r\n>When running the command get_chirps, I get the following warning message: Warning messages: 1: In st_centroid.sfc(x$geometry) : st_centroid does not give correct centroids for longitude/latitude data. 2: In st_centroid.sfc(x$geometry) : st_centroid does not give correct centroids for longitude/latitude data. Can this warning be eliminated? Maybe adding a note in the documentation? \r\n\r\nSame as above\r\n\r\n>more in-depth discussion of the functionalities included in the package will make it easier for the reader to understand if the chirps dataset is suitable for a given purpose. I would also mention that requests may take a long time to be executed. Is it feasible to use these functions to download large amount of data (for instance to perform a global scale analysis)? In general, a mention of the limitations of this package would be valuable. \r\n\r\nWe added a section for the package limitations. And a better explanation about CHIRPS application into the paper. Also, W. 
Ashmall says here https://github.com/agrobioinfoservices/chirps/issues/12 that they are planning to upgrade the API service which will make it better to request queries to ClimateSERV.\r\n\r\n# LICENSE = GPL-3\r\n>when you use a widely known license you should not need to add a copy of the license to your repo. The files LICENSE and LICENSE.md are redundant and can be removed. \r\n\r\nDONE\r\n\r\n>When I got my packages reviewed I was made aware that GPL-3 is a strongly protective license and, if you want your package to be used widely (also commercially), MIT or Apache licenses are more suitable. I just wanted to pass on this very valuable suggestion I received. \r\n\r\nThank you, we changed to MIT as suggested\r\n\r\n# inst/paper folder:\r\n\r\n>it seems the code in your paper is only visualised but not executed. In this case, you could keep the paper.md and remove paper.Rmd. Also paper.pdf could be removed. \r\n\r\nDONE\r\n\r\n>Fig1.svg is redundant (Fig1.png is used for rendering the paper). \r\n\r\nDONE\r\n\r\n>in the paper, I would move the introduction to the CHIRPS data at the beginning as readers might not be familiar with these data. \r\n\r\nDONE\r\n\r\n>in the paper you use the command chirps:::tapajos to load data in your sysdata.rda. This is not good practice. The ::: operator should not be used as it exposes non-exported functionalities. If you move sysdata under the chrips/data folder (as suggested above), the dataset can be loaded using data(\"sysdata\") or chirps::tapajos. \r\n\r\nDONE\r\n\r\n>Towards the end of your paper you state: Overall, these indices proved to be an excellent proxy to evaluate the climate variability using precipitation data [#DeSousa2018], the effects of climate change [#Aguilar2005], crop modelling [#Kehel2016] and to define strategies for climate adaptation [#vanEtten2019]. Maybe you could expand a bit, perhaps on the link with crop modelling? 
\r\n\r\nWe updated this section with more examples, and hopefully a better explanation on CHIRPS applications and how *chirps* can help\r\n\r\n# goodpractice::goodpractice():\r\n\r\n>write unit tests for all functions, and all package code in general. 34% of code lines are covered by test cases. This differs from what is stated on GitHub (codecv badge = ~73% code coverage). The reason might be due to the fact you skip most of your tests on cran, is this because tests take too long to run? If so, is there a way you could modify the tests so that they take less time?\r\n\r\nSame as above in the tests section\r\n\r\n>fix this R CMD check WARNING: Missing link or links in documentation object 'precip_indices.Rd': ‘[tidyr]{pivot_wider}’ See section 'Cross-references' in the 'Writing R Extensions' manual. Maybe you could substitute \\code{\\link[tidyr]{pivot_wider}} with \\code{tidyr::pivot_wider()}. \r\n\r\nThe code was removed from seealso in the documentation.\r\n\r\nAgain, thank you for your time reviewing this package. We hope you like its new version.\r\n\r\n\r\n\r\n\r\n" ,
id = 582430287L, number = 357), row.names = 48L, class = "data.frame")
Issue: Split each row into multiple rows, according to my conditions:
1. By paragraph (split on "\r\n\r\n").
2. If a paragraph starts with >, keep it with the following one in the same row.
I did the following to achieve (1), but the problem is the result does not fulfil condition (2):
# This is what I did for condition (1)
split <- tidyr::separate_rows(df, "body", sep = "\r\n\r\n")
For example, for condition (2) I will want the following two paragraphs to remain together in the same row:
>When running the command get_chirps, I get the following warning message: Warning messages: 1: In st_centroid.sfc(x$geometry) : st_centroid does not give correct centroids for longitude/latitude data. 2: In st_centroid.sfc(x$geometry) : st_centroid does not give correct centroids for longitude/latitude data. Can this warning be eliminated? Maybe adding a note in the documentation?
Same as above
Question: How can I split each row, with both conditions?
You can separate the rows on "\r\n\r\n", assign an id that increments whenever the preceding paragraph does not begin with ">", then collapse by id:
library(tidyr)
library(dplyr)
library(stringr)
separate_rows(df, "body", sep = "\r\n\r\n") %>%
  mutate(id = cumsum(str_detect(lag(body, default = ""), "^>", negate = TRUE))) %>%
  group_by_at(vars(-body)) %>%
  summarise(body = str_flatten(body, "\n"))
# A tibble: 30 x 5
# Groups: issue_url, user, id [30]
issue_url user id number body
<chr> <chr> <int> <dbl> <chr>
1 https://api.github.com/repos/ropensci/softw~ kauedeso~ 1 357 "Dear #cvitolo thank you so much for your comments and suggestions. It helped a lot! We have worked in incorporating th~
2 https://api.github.com/repos/ropensci/softw~ kauedeso~ 2 357 "# general comments"
3 https://api.github.com/repos/ropensci/softw~ kauedeso~ 3 357 ">README, it seems the code in your README file is only visualised but not executed. In this case, you could keep the R~
4 https://api.github.com/repos/ropensci/softw~ kauedeso~ 4 357 "> Your R folder contains a file called sysdata.rda. Is there a reason to keep these data there? Usually data should be~
5 https://api.github.com/repos/ropensci/softw~ kauedeso~ 5 357 "> the man folder contains a figure folder. Is this good practice? I thought the man folder should be used only for doc~
6 https://api.github.com/repos/ropensci/softw~ kauedeso~ 6 357 "#\ttests folder:"
7 https://api.github.com/repos/ropensci/softw~ kauedeso~ 7 357 "> each of your test files contain a call to library(chirps). This is superflous because you load the package in testth~
8 https://api.github.com/repos/ropensci/softw~ kauedeso~ 8 357 "> When you call library() sometimes you use library(package_name), other times library(\"package_name\"). I would sugg~
9 https://api.github.com/repos/ropensci/softw~ kauedeso~ 9 357 "> test-get_chirps.R: you only test that you get the right object class but do not test the returned values, is there a~
10 https://api.github.com/repos/ropensci/softw~ kauedeso~ 10 357 ">test-get_esi.R: as above, you only test that you get the right object class but do not test the returned values. I wo~
# ... with 20 more rows
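For readers who want to check the grouping rule in isolation, the same split-then-merge logic can be prototyped in plain Python. This is only an illustrative sketch (the function name and sample text are invented), mirroring the cumsum/lag idea of the dplyr pipeline:

```python
def split_paragraphs(body, sep="\r\n\r\n"):
    """Split text into paragraphs, then merge each paragraph that
    starts with '>' with the paragraph(s) following it, mirroring
    cumsum(str_detect(lag(body), "^>", negate = TRUE)) in the R code."""
    groups = []
    for para in body.split(sep):
        # stay in the current group if the previous paragraph was a quote
        if groups and groups[-1][-1].lstrip().startswith(">"):
            groups[-1].append(para)
        else:
            groups.append([para])
    return ["\n".join(g) for g in groups]

print(split_paragraphs("intro\r\n\r\n> a question\r\n\r\nits answer\r\n\r\noutro"))
# → ['intro', '> a question\nits answer', 'outro']
```

A quoted paragraph followed by several answer paragraphs stays in one group, exactly as the cumulative-sum id does in the tidyverse version.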

openmap NullPointerException Error in osmtile could not obtain tile

I am trying to plot a small rectangle of a map:
library(OpenStreetMap)
upper_left <- c(47.413, 8.551);
lower_right <- c(47.417, 8.556);
map_osm <- openmap(upper_left, lower_right, type = 'osm' );
plot(map_osm );
When I run that, the openmap function gives me the error Error in osmtile(x%%nX, y, zoom, type) : could not obtain tile: 540 298 10.
The documentation of OpenStreetMap seems to indicate that I need to add an API key. However, I am not sure how exactly I would do that (because I use type='osm', not type = url) and I am also unclear where I'd get such an API key from.
The java.lang.NullPointerException and the subsequent R error (Error in osmtile(...)) seem to come from an older version of OpenStreetMap.
After updating OpenStreetMap to the latest version (currently 0.3.4), the error disappears and the OP's example code should work as-is, without needing an API key.
The accepted answer is not adequate, as the error can occur even with the most recent package version.
Sometimes, if a particular area is not available in a specific style, you get an error similar to the one above regardless of the package version. The solution is to try the function with a different style; this is mentioned in a blog post.
As an example, the following modification may solve the issue:
library(OpenStreetMap)
upper_left <- c(47.413, 8.551);
lower_right <- c(47.417, 8.556);
map_osm <- openmap(upper_left, lower_right, type = 'opencyclemap');
plot(map_osm)

Python LDA gensim "DeprecationWarning: invalid escape sequence"

I am new to Stack Overflow and Python, so please bear with me.
I am trying to run a Latent Dirichlet Allocation (LDA) analysis on a text corpus with the gensim package in Python, using the PyCharm editor. I prepared the corpus in R and exported it to a csv file using this R command:
write.csv(testdf, "C://...//test.csv", fileEncoding = "utf-8")
Which creates the following csv structure (though with much longer and already preprocessed texts):
,"datetimestamp","id","origin","text"
1,"1960-01-01","id_1","Newspaper1","Test text one"
2,"1960-01-02","id_2","Newspaper1","Another text"
3,"1960-01-03","id_3","Newspaper1","Yet another text"
4,"1960-01-04","id_4","Newspaper2","Four Five Six"
5,"1960-01-05","id_5","Newspaper2","Alpha Bravo Charly"
6,"1960-01-06","id_6","Newspaper2","Singing Dancing Laughing"
I then try the following essential Python code (based on the gensim tutorials) to perform a simple LDA analysis:
import gensim
from gensim import corpora, models, similarities, parsing
import pandas as pd
from six import iteritems
import os
import pyLDAvis.gensim

class MyCorpus(object):
    def __iter__(self):
        for row in pd.read_csv('//mpifg.local/dfs/home/lu/Meine Daten/Imagined Futures and Greek State Bonds/Topic Modelling/Python/test.csv', index_col=False, header=0, encoding='utf-8')['text']:
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(row.split())

if __name__ == '__main__':
    dictionary = corpora.Dictionary(row.split() for row in pd.read_csv(
        '//.../test.csv', index_col=False, encoding='utf-8')['text'])
    print(dictionary)
    dictionary.save('//.../greekdict.dict')  # store the dictionary, for future reference

    ## create an MmCorpus
    corpora.MmCorpus.serialize('//.../greekcorpus.mm', MyCorpus())
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')
    dictionary = corpora.Dictionary.load('//.../greekdict.dict')
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')

    # train model
    lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, iterations=1000)
I get the following warnings, and then the code exits:
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:832: DeprecationWarning: invalid escape sequence \d
\...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2736: DeprecationWarning: invalid escape sequence \d
\...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2914: DeprecationWarning: invalid escape sequence \g
\...\Python\venv\lib\site-packages\pyLDAvis\_prepare.py:387:
DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
I cannot find any solution and, to be honest, have no clue where exactly the problem comes from. I spent hours making sure that the csv's encoding is utf-8 and that it was exported (from R) and imported (in Python) correctly.
What am I doing wrong or where else could I look at? Cheers!
A DeprecationWarning is exactly that: a warning about a feature being deprecated, meant to prompt the user to switch to other functionality in order to maintain compatibility in the future. So in your case I would just watch for updates of the libraries you use.
Starting with the last warning, it looks like it originates from pandas and has been logged against pyLDAvis here.
The remaining ones come from the pyparsing module, but it does not seem that you import it explicitly. Maybe one of the libraries you use has it as a dependency and calls some relatively old, deprecated functionality. To get rid of the warnings, I would first check whether upgrading helps. Good luck!
Try using this:
import warnings
warnings.filterwarnings("ignore")
pyLDAvis.enable_notebook()
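If you do go the suppression route, a slightly safer variant (a sketch using only the standard-library warnings module; the helper name is made up) is to silence DeprecationWarning specifically, so that other, potentially important warnings still surface:

```python
import warnings

def silence_deprecations():
    """Ignore only DeprecationWarning; other warning categories
    (e.g. UserWarning) are still shown."""
    warnings.filterwarnings("ignore", category=DeprecationWarning)
```

Call silence_deprecations() once at the top of the script; a blanket filterwarnings("ignore") hides every warning, which can mask real problems.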

Kindly check the R command

I am doing the following with the Cooccur library in R.
> fb<-read.table("Fb6_peaks.bed")
> f1<-read.table("F16_peaks.bed")
Everything is OK with the first two commands, and I can also display the data:
> fb
> f1
But when I give the next command as given below
> explore_pairs(c("fb", "f1"))
I get an error message:
Error in sum(sapply(tf1_s, score_sample, tf2_hits = tf2_s, hit_list = hit_l)) :
invalid 'type' (list) of argument
Could anyone suggest something?
Despite promising in the article, published over a year ago, to release a version to the Bioconductor repository, the authors have still not delivered. The gz file attached to the article is not in a form that my installation recognizes. You really should be corresponding with the authors about this question.
The nature of the error message suggests that the function is expecting a different data class. You should look at the specification of the arguments in the help(explore_pairs) file. If it expects two matrices, then wrapping data.matrix around the arguments may solve the problem; but if it expects a class created by one of that package's functions, then you need to take the necessary steps to construct the right objects.
The help file for explore_pairs does exist (at least in the man directory) and says the first argument should be a character vector, with further provisos:
\arguments{
\item{factornames}{an vector of character strings, each naming a GFF-like
data frame containing the binding profile of a DNA-binding factor.
There is also a load utility, load_GFF, which I assume is designed for creation of such files.
Try renaming your data frame's columns:
names(fb) = c("seq", "start", "end")
Check the example datasets; the column names are as above. I set the names and it worked.
