How do I use ReactMarkdown with Deno's Fresh framework? - deno

I'm a deno/fresh newbie and really appreciate the help.
I'm importing ReactMarkdown per the docs like this:
import ReactMarkdown from "https://esm.sh/react-markdown@7";
But it's unclear how to use it in the Fresh/Preact world. Naively I've tried the below, but that generates this error: TypeError: Cannot convert a Symbol value to a string...at E (https://esm.sh/v102/preact-render-to-string@5.2.4/X-ZS8q/deno/preact-render-to-string.js:12:1829).
<ReactMarkdown>
### hey there
</ReactMarkdown>
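In case it helps anyone else who lands here: the stack trace points at preact-render-to-string, so the error appears to come from it trying to render React-created elements. Below is a minimal sketch of one possible workaround, assuming esm.sh's alias query parameter is used to map react and react-dom onto preact/compat so that react-markdown builds Preact elements instead. The route file name, component name, and the alias approach are assumptions on my part, not something confirmed by the question or the Fresh docs.
// routes/markdown-demo.tsx (hypothetical file name in a default Fresh project
// where preact is the configured JSX import source)
// Assumption: the ?alias parameter makes react-markdown resolve react/react-dom
// to preact/compat, so its output can be rendered by preact-render-to-string.
import ReactMarkdown from "https://esm.sh/react-markdown@7?alias=react:preact/compat,react-dom:preact/compat";

export default function MarkdownDemo() {
  // Pass the markdown source as a plain string child rather than raw JSX text.
  const source = "### hey there";
  return <ReactMarkdown>{source}</ReactMarkdown>;
}
If that still fails, pinning a Preact version via esm.sh's deps parameter or switching to a Preact-native markdown component are other avenues worth trying.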

Related

Can't import .biom file with import_biom in R

I have collected 6 fastq files from the same mock sample and merged them using gzip in Linux for further use with Kraken2. The output file from Kraken2 (.report) was converted to .biom format using kraken-biom in Linux. When I then try to import the .biom file into R using the import_biom function, I receive the following message:
Error in validObject(.Object) : invalid class “phyloseq” object:
Component sample names do not match. Try sample_names()
I have opened the .biom file and can only see one sample name (the one I gave the output file during gzip). I tried to use sample_names(), but can't, since the .biom file is not loaded into R. Does anyone know why the sample names do not match? Since I merged them into one, shouldn't there be only one sample name?
Edit: When I run Kraken2 on the 6 fastq files without merging them and then use kraken-biom, the .biom file imports into R without issue.

Plotting GTFS object routes with tidytransit in R

I need to plot General Transit Feed Specification (GTFS) object routes and their frequencies. For this purpose, I have run the following code from the package manual (https://cran.r-project.org/web/packages/tidytransit/tidytransit.pdf)
to get some practice. Although the code is taken from the manual, I get the error below. Can anyone clarify this issue and show me an alternative way to perform this spatial analysis?
library(tidytransit)
local_gtfs_path <- system.file("extdata",
                               "google_transit_nyc_subway.zip",
                               package = "tidytransit")
nyc <- read_gtfs(local_gtfs_path,
                 local = TRUE)
plot(nyc)
Error in UseMethod("inner_join") :
no applicable method for 'inner_join' applied to an object of class "NULL"
Thanks for posting this!
This happened because we made a change in the API, and I think the docs you were looking at were out of sync. They should be up to date now; see http://tidytransit.r-transit.org/articles/introduction.html
We also made a change so that the plot() function will work as specified in both the old docs and the new docs.

Python LDA gensim "DeprecationWarning: invalid escape sequence"

I am new to Stack Overflow and Python, so please bear with me.
I am trying to run a Latent Dirichlet Allocation (LDA) on a text corpus with the gensim package in Python, using the PyCharm editor. I prepared the corpus in R and exported it to a CSV file using this R command:
write.csv(testdf, "C://...//test.csv", fileEncoding = "utf-8")
This creates the following CSV structure (though with much longer and already preprocessed texts):
,"datetimestamp","id","origin","text"
1,"1960-01-01","id_1","Newspaper1","Test text one"
2,"1960-01-02","id_2","Newspaper1","Another text"
3,"1960-01-03","id_3","Newspaper1","Yet another text"
4,"1960-01-04","id_4","Newspaper2","Four Five Six"
5,"1960-01-05","id_5","Newspaper2","Alpha Bravo Charly"
6,"1960-01-06","id_6","Newspaper2","Singing Dancing Laughing"
I then try the following essential Python code (based on the gensim tutorials) to perform a simple LDA analysis:
import gensim
from gensim import corpora, models, similarities, parsing
import pandas as pd
from six import iteritems
import os
import pyLDAvis.gensim
class MyCorpus(object):
    def __iter__(self):
        for row in pd.read_csv('//mpifg.local/dfs/home/lu/Meine Daten/Imagined Futures and Greek State Bonds/Topic Modelling/Python/test.csv', index_col=False, header=0, encoding='utf-8')['text']:
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(row.split())

if __name__ == '__main__':
    dictionary = corpora.Dictionary(row.split() for row in pd.read_csv(
        '//.../test.csv', index_col=False, encoding='utf-8')['text'])
    print(dictionary)
    dictionary.save(
        '//.../greekdict.dict')  # store the dictionary, for future reference

    ## create an mmCorpus
    corpora.MmCorpus.serialize('//.../greekcorpus.mm', MyCorpus())
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')
    dictionary = corpora.Dictionary.load('//.../greekdict.dict')
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')

    # train model
    lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, iterations=1000)
I get the following error codes and the code exits:
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:832: DeprecationWarning: invalid escape sequence \d
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2736: DeprecationWarning: invalid escape sequence \d
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2914: DeprecationWarning: invalid escape sequence \g
...\Python\venv\lib\site-packages\pyLDAvis\_prepare.py:387:
DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
I cannot find any solution and, to be honest, have no clue where exactly the problem comes from. I spent hours making sure that the encoding of the CSV is UTF-8 and that it is exported (from R) and imported (in Python) correctly.
What am I doing wrong, or where else could I look? Cheers!
A DeprecationWarning is exactly that: a warning about a feature being deprecated, which is supposed to prompt the user to use some other functionality instead, to maintain compatibility in the future. So in your case I would just watch for updates of the libraries that you use.
Starting with the last warning, it looks like it originates from pandas and has been logged against pyLDAvis here.
The remaining ones come from the pyparsing module, but it does not seem that you are importing it explicitly. Maybe one of the libraries you use depends on it and uses some relatively old, deprecated functionality. To eradicate the warnings, I would start by checking whether upgrading helps. Good luck!
Try using this:
import warnings
warnings.filterwarnings("ignore")

import pyLDAvis  # needed so enable_notebook() can be called
pyLDAvis.enable_notebook()

Use module function instead of Base function Julia

Let's say I have a Julia module, MyModule, that defines a function called conv() for 1D convolution. I would like to be able to call this function via MyModule.conv() in a file that imports or uses the module. Example:
import MyModule.conv
MyModule.conv()
However, I cannot get this syntax to work; Julia still calls Base.conv() instead of MyModule.conv(). I have tried all the different styles of using and import but cannot get it to work.
Is this functionality available in Julia? I feel like I have seen this implemented in other Julia packages but cannot find an example that works.
EDIT
Current setup is as follows; there is no reference to conv() anywhere in ModA outside of the definition.
module ModA

function conv(a, b)
    println("ModA.conv")
end

end
Then, in another file,
import ModA
conv(x, y) #still calls Base.conv()
SOLVED
This was entirely my fault. The import wasn't working because of an incorrect LOAD_PATH: it was picking up a different version of the file than I thought was being called (Julia found the first one on the LOAD_PATH, which was not the one I was editing).
Do you mean something like this?
module ModA
    conv() = println("Calling ModA.conv")
end

module ModB              # Import just conv from ModA
    using ..ModA.conv    # Could have used import ..ModA.conv instead
    conv()               # if we want to be able to extend ModA.conv
end

module ModC              # Import ModA and use qualified access to access ModA.conv
    import ..ModA
    ModA.conv()
end
You must make sure not to make any reference to conv inside ModA before you have defined your own function, or it will already have looked up Base.conv and associated the name conv with that.

Import Quandl's harmonized data into R

I'm looking for a way to import Quandl's harmonized data into R.
I tried to use the Quandl package with:
Quandl("RAYMOND/MSFT_COSTOFREVENUE_Q", trim_start="2009-06-27", trim_end="2014-03-29")
but with no luck, so I tried:
Quandl("RAYMOND/MSFT_COST_OF_REVENUE_Q", trim_start="2009-06-27", trim_end="2014-03-29")
The error that I got is:
Requested entity does not exist. This could mean the code does not
exist or the parameters you have passed have returned an empty
dataset.
Any idea?
It was a typo on the page you were looking at.
The call is:
Quandl("RAYMOND/MSFT_COST_OF_REVENUE_TOTAL_Q", trim_start="2009-06-27", trim_end="2014-03-29")
