My code is below.
V = ntuple(x -> zeros(5, 5), 2)
V1 = rand(5,5)
copy!(V[1], V1)
I would like to replace all the values in V[1] with those of V1. copy! works well in Julia 0.6.3; however, it doesn't work in Julia 1.0.1.
Error message: MethodError: no method matching copy!(::Array{Float64,2}, ::Array{Float64,2})
I really appreciate your help.
Use .=:
V = ntuple(x -> zeros(5, 5), 2)
V1 = rand(5,5)
V[1] .= V1
It copies the values of V1 into V[1] element by element. (In Julia 1.0 the old copy! for arrays was renamed copyto!, so copyto!(V[1], V1) also works.)
I am still new to R programming and have no idea how to rewrite the Python code below in R.
human_data is a data frame read from a CSV file, and its 'sequence' column contains sequences of letters. Basically, I want to convert each string in the 'sequence' column into all possible k-mer words of length 6.
def getKmers(sequence, size=6):
    return [sequence[x:x+size] for x in range(len(sequence) - size + 1)]

human_data['words'] = human_data.apply(lambda x: getKmers(x['sequence']), axis=1)
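Run standalone, the comprehension above slides a window of length size across the string, producing one k-mer for every position where a full window fits:

```python
def getKmers(sequence, size=6):
    # one k-mer starting at every position where a full window fits
    return [sequence[x:x+size] for x in range(len(sequence) - size + 1)]

print(getKmers("abcdefghijkl"))
# ['abcdef', 'bcdefg', 'cdefgh', 'defghi', 'efghij', 'fghijk', 'ghijkl']
```

A 12-character sequence therefore yields 12 - 6 + 1 = 7 six-character k-mers.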
You could also use the quanteda library to compute the k-mers (character n-grams); the following code shows an example:
library(quanteda)
k = 6 # 6-mers
human_data = data.frame(sequence=c('abcdefghijkl', 'xxxxyyxxyzz'))
human_data$words <- apply(human_data, 1,
function(x) char_ngrams(unlist(tokens(x['sequence'],
'character')), n=k, concatenator = ''))
human_data
# sequence words
#1 abcdefghijkl abcdef, bcdefg, cdefgh, defghi, efghij, fghijk, ghijkl
#2 xxxxyyxxyzz xxxxyy, xxxyyx, xxyyxx, xyyxxy, yyxxyz, yxxyzz
I hope this helps; here is a version using base R commands:
df = data.frame(words=c('asfdklajsjahk', 'dkajsadjkfggfh', 'kfjlhdaDDDhlw'))
getKmers = function(sequence, size=6) {
  kmers = c()
  for (x in 1:(nchar(sequence) - size + 1)) {
    kmers = c(kmers, substr(sequence, x, x + size - 1))
  }
  return(kmers)
}
sapply(df$words, getKmers)
In an effort to test featuretools, I installed featuretoolsR through RStudio, and installed numpy and featuretools in Python.
However, on trying to create an entityset, the following error comes up:
# Libs
library(featuretoolsR)
library(magrittr)
# Create some mock data
set_1 <- data.frame(key = 1:100, value = sample(letters, 100, T))
set_2 <- data.frame(key = 1:100, value = sample(LETTERS, 100, T))
# Create entityset
es <- as_entityset(set_1, index = "key", entity_id = "set_1", id = "demo")
Error: lexical error: invalid char in json text.
WARNING: The conda.compat modul
(right here) ------^
Kindly help in diagnosing the problem and providing a solution.
The same warning happened to me after updating to conda version 4.6.11. I think the problem is caused by the print statement at the end of the compat.py script. I know this is not a great fix, but I accessed the compat.py file and removed the print statement:

print("WARNING: The conda.compat module is deprecated and will be removed in a future release.", file=sys.stderr)

The file should be located here: \Anaconda3\pkgs\conda-4.6.11-py37_0\Lib\site-packages\conda

A less invasive alternative is usually to update conda itself (conda update conda), since later releases drop this warning. I hope it helps.
I am making a neural network in R so I can predict future data.
First, I made a function that builds the layers:
add_layer <- function(x, in_size, out_size, act_function){
  w = tf$variable(tf$random_normal(shape(in_size, out_size)))
  b = tf$variable(tf$random_normal(shape(1, out_size)))
  wxb = tf$matmul(x, w) + b
  y = act_function(wxb)
  return(y)
}
Then, I create the layers. For now, I create 2 layers:
x = tf$placeholder(tf$float32, shape(NULL,31))
ty = tf$placeholder(tf$float32, shape(NULL, 2))
#First layer
l1 = add_layer(x, 31, 10, tf$nn$relu)
#Second layer, result is 0(false) or 1(true)
l = add_layer(l1, 10,2, tf$nn$sotfmax)
But then there is an error when I make layer l1 and layer l:
AttributeError: module 'tensorflow' has no attribute 'variable'
The problem is, when I remove in_size or out_size, it tells me that these arguments are missing; when I add the two back, it gives me the error above. Even after filling in all the parameters (in_size, out_size, x, and the activation function), it still reports 'variable' missing as seen above.
Any suggestions how to solve this?
Edit: I changed it to a capital letter V, but the result is still the same.
I am trying to create a word cloud for bigrams (and higher n-grams) using the code below:
library(tm)
library(RWeka)
library(wordcloud)

text_input <- scan("Path/Wordcloud.txt", what = "character")
corpus <- Corpus(VectorSource(text_input))
corpus.ng <- tm_map(corpus, removeWords, c(stopwords(), "s", "ve"))
corpus.ng <- tm_map(corpus.ng, removePunctuation)
corpus.ng <- tm_map(corpus.ng, removeNumbers)

BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
tdm.bigram <- TermDocumentMatrix(corpus.ng, control = list(tokenize = BigramTokenizer))
tdm.bigram

freq <- sort(rowSums(as.matrix(tdm.bigram)), decreasing = TRUE)
freq.df <- data.frame(word = names(freq), freq = freq)
head(freq.df, 20)

pal <- brewer.pal(8, "Blues")
pal <- pal[-(1:3)]
wordcloud(freq.df$word, freq.df$freq, max.words = 100, random.order = FALSE, colors = pal)
I have seen similar code on a few websites being used for generating n-grams, but I am getting only single-word combinations in my output.
The code does not respond to min and max being set to different values (2, 3, 4, etc.) in the NGramTokenizer function.
Am I missing something in the code, or is it possible that one of the libraries I am calling (tm, ggplot2, wordcloud, RWeka) or their dependencies (like rJava) is not responding? I would be really grateful for any pointers on this issue or suggested modifications to the above code.
Thanks,
Saibal
You are missing the token delimiters. Pass them through Weka_control, and keep the tokenizer as a function so TermDocumentMatrix can call it:

token_delim <- " \\t\\r\\n.!?,;\"()"
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2, delimiters = token_delim))
This should work.
In case you need a working example, you can check this five-minute video:
https://youtu.be/HellsQ2JF2k
Hope this helps.
Also, some others have had problems using the Corpus function. Try using the volatile corpus instead:
corpus <- VCorpus(VectorSource(text_input))
I tried the following and it worked:
minfreq_bigram <- 2
bitoken <- NGramTokenizer(corpus, Weka_control(min = 2, max = 2))
two_word <- data.frame(table(bitoken))
sort_two <- two_word[order(two_word$Freq, decreasing = TRUE), ]
wordcloud(sort_two$bitoken, sort_two$Freq, random.order = FALSE, scale = c(2, 0.35),
          min.freq = minfreq_bigram, colors = brewer.pal(8, "Dark2"), max.words = 150)
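If it helps to sanity-check the expected bigram counts outside R, the tokenize-then-tabulate step above can be sketched in plain Python (bigrams here is a hypothetical stand-in for NGramTokenizer followed by table):

```python
from collections import Counter

def bigrams(text):
    # split on whitespace, then pair each token with its successor
    tokens = text.split()
    return [" ".join(pair) for pair in zip(tokens, tokens[1:])]

freq = Counter(bigrams("the cat sat on the cat"))
print(freq.most_common(2))
# [('the cat', 2), ('cat sat', 1)]
```

If this kind of counting produces only single words, the tokenizer (not the word cloud code) is where to look.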
I have run into a recurring issue when using the car package recode function. If I recreate a publicly used example (http://susanejohnston.wordpress.com/2012/07/18/find-and-replace-in-r-part-1-recode-in-the-library-car/)
and do:
y <- sample(c("Perch", "Goby", "Trout", "Salmon"), size = 10, replace = T)
y1 <- recode(y, "c("Perch", "Goby") = "Perciform" ; c("Trout", "Salmon") = "Salmonid"")
It returns:
Error: unexpected symbol in "y1 <- recode(y, "c("Perch"
I am running R 3.1.0 and using car_2.0-22
I assume that the author of the page was able to complete their action posted, but I can't recreate it and it is the same issue I have in my data. Thoughts?
I was the author of the wordpress document; the code there is wrong, and thanks for flagging the issue.
The problem is that the car::recode specification must be wrapped in single quotes when the recode values themselves use double quotes (or see the comment from @MrFlick below on other acceptable syntax).
y1 <- recode(y, 'c("Perch", "Goby") = "Perciform" ; c("Trout", "Salmon") = "Salmonid"')
y1
[1] "Perciform" "Salmonid" "Perciform" "Salmonid" "Salmonid" "Perciform" "Salmonid" "Perciform"
[9] "Salmonid" "Perciform"
Should work.
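For comparison only, the same many-to-one recode can be sketched outside R in plain Python with a lookup table, which sidesteps the nested-quoting issue entirely:

```python
# map each original label to its recoded category
mapping = {"Perch": "Perciform", "Goby": "Perciform",
           "Trout": "Salmonid", "Salmon": "Salmonid"}

y = ["Perch", "Goby", "Trout", "Salmon"]
y1 = [mapping[v] for v in y]
print(y1)
# ['Perciform', 'Perciform', 'Salmonid', 'Salmonid']
```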