I'm quite confused.
I have 50 clusters, each with a different size, and I have two variables, "Year" and "Income level".
The data set I have right now has 10,000 rows, where each row represents a single individual.
What I want to do is form a new data set from this data frame with one row per cluster (50 rows), where the columns are the two variables plus the cluster variable. The problem is that these two variables (which we call the study-level covariates) do not have a unique value within each cluster.
How would I put them in one cell for each cluster, then?
X1 <- c(1,1,1,2,2,2,2,2,3,3,4,4,4,4,4,4) # Clusters
X2 <- c(1,2,3,1,1,1,1,1,1,2,3,3,1,1,2,2) # Covariate 1
X3 <- c(1991,2001,2002,1998,2014,2015,1990,
        2002,2004,2006,2006,2006,2005,2003,2003,2000) # Covariate 2
data <- data.frame(X1, X2, X3)
My desired output should be something like this:
|Clusters|Covariate1|Covariate2|
|--------|----------|----------|
|1       |    ?     |    ?     |
|2       |    ?     |    ?     |
|3       |    ?     |    ?     |
|4       |    ?     |    ?     |
Meaning that instead of a data frame with 16 rows, I get a data frame with 4 rows.
Here is how to aggregate the data using the average of each covariate per cluster:
df <- data.frame(X1 = c(1,1,1,2,2,2,2,2,3,3,4,4,4,4,4,4),
                 X2 = c(1,2,3,1,1,1,1,1,1,2,3,3,1,1,2,2),
                 X3 = c(1991,2001,2002,1998,2014,2015,1990,2002,
                        2004,2006,2006,2006,2005,2003,2003,2000))
library(tidyverse)
df %>% group_by(X1) %>% summarise(mean_cov1 = mean(X2))
# A tibble: 4 x 2
X1 mean_cov1
* <dbl> <dbl>
1 1 2
2 2 1
3 3 1.5
4 4 2
For the case you are working on, you have to decide what the most relevant aggregation is. You can also compute several aggregations at once, as shown below.
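For example, a minimal sketch that averages both covariates per cluster in a single summarise() call (the output column names mean_cov1 and mean_cov2 are just illustrative):
df %>%
  group_by(X1) %>%
  summarise(mean_cov1 = mean(X2),
            mean_cov2 = mean(X3))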
I have a data frame like the following:
group | amount_food | amount_finance | amount_clothes
A | 30 | 40 | 50
B | 34 | 43 | 53
C | 50 | 86 | 90
I would like to colour the contents of the cells depending on the value (a gradient of sorts, where e.g. red would indicate higher and blue would indicate lower values). Similar to conditional formatting in Excel. Ideally I would like this done on a column-by-column basis, so I know which group has the highest amount_food etc.
How can I achieve this in R?
df <- read.csv("shopspend.csv")
I'm new to R, so any pointers are helpful.
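One way to get Excel-style conditional formatting is an HTML table widget. A minimal sketch using the DT package (assuming it is installed; the cut points below are made up for illustration and would be chosen per column):
library(DT)

df <- read.csv("shopspend.csv")

# colour the amount_food cell backgrounds in three bands (blue = low, red = high)
datatable(df) %>%
  formatStyle("amount_food",
              backgroundColor = styleInterval(c(33, 45),
                                              c("lightblue", "white", "salmon")))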
I have a dataset with items but no user ratings.
Items have features (~400 features).
I want to measure the similarity between items based on their features (row similarity).
I converted the item-feature data into a binary matrix like the following:
itemID | feature1 | feature2 | feature3 | feature4 ....
1 | 0 | 1 | 1 | 0
2 | 1 | 0 | 0 | 1
3 | 1 | 1 | 1 | 0
4 | 0 | 0 | 1 | 1
I don't know what to use (and how to use it) to measure the row similarity.
I want, for item X, to get the top k similar items.
Sample code would be very much appreciated.
What you are looking for is termed a similarity measure. A quick Google/SO search will reveal various methods for computing the similarity between two vectors. Here is some sample code in Python for cosine similarity:
from math import sqrt

def square_rooted(x):
    return round(sqrt(sum([a*a for a in x])), 3)

def cosine_similarity(x, y):
    numerator = sum(a*b for a, b in zip(x, y))
    denominator = square_rooted(x) * square_rooted(y)
    return round(numerator / float(denominator), 3)

print(cosine_similarity([3, 45, 7, 2], [2, 54, 13, 15]))
taken from: http://dataaspirant.com/2015/04/11/five-most-popular-similarity-measures-implementation-in-python/
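Applied to the binary item-feature rows from your question, a minimal numpy sketch (the top-k step simply sorts one row of the full pairwise similarity matrix, which is fine for small data):
import numpy as np

# binary item-feature matrix from the question (rows = items)
M = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

# cosine similarity between every pair of rows
unit = M / np.linalg.norm(M, axis=1, keepdims=True)
sim = unit @ unit.T

k = 2
item = 0  # item X (row index)
top_k = np.argsort(-sim[item])[1:k+1]  # drop position 0, the item itself
print(top_k, sim[item][top_k])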
I noticed that you want the top k similar items for every item. The best way to do that is with a k-nearest-neighbours (kNN) implementation. What you can do is create a kNN graph and return the top k similar items from the graph for a query.
A great library for this is nmslib. Here is some sample code for a kNN query using the library's HNSW method with cosine similarity (you can use any of the several available methods; HNSW is particularly efficient for your high-dimensional data):
import nmslib
import numpy
# create a random matrix to index
data = numpy.random.randn(10000, 100).astype(numpy.float32)
# initialize a new index, using a HNSW index on Cosine Similarity
index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(data)
index.createIndex({'post': 2}, print_progress=True)
# query for the nearest neighbours of the first datapoint
ids, distances = index.knnQuery(data[0], k=10)
# get all nearest neighbours for all the datapoints
# using a pool of 4 threads to compute
neighbours = index.knnQueryBatch(data, k=10, num_threads=4)
At the end of the code, the k top neighbours for every data point will be stored in the neighbours variable. You can use that for your purposes.
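In your case, data would presumably be your item-by-feature binary matrix, cast to numpy.float32 as in the snippet above.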
I have tabular data like:
+---+----+----+
| | a | b |
+---+----+----+
| P | 1 | 2 |
| Q | 10 | 20 |
+---+----+----+
and I want to represent this using a Dict.
With the column and row names:
x = ["a", "b"]
y = ["P", "Q"]
and data
data = [ 1 2 ;
10 20 ]
How may I create a dictionary object d, so that d["a", "P"] = 1 and so on? Is there a way like
d = Dict(zip(x,y,data))
?
Your code works with a minor change to use Iterators.product:
d = Dict(zip(Iterators.product(x, y), permutedims(data)))
On current Julia, Iterators.product is part of Base, so nothing extra needs to be installed (on very old versions it lived in the Iterators.jl package, added with Pkg.add("Iterators") and loaded with using Iterators). Because Julia matrices are column-major (elements are stored in order within columns, and columns are stored in order within the matrix), the data matrix has to be transposed, here with permutedims (older Julia wrote this with the .' operator).
This is a literal answer to your question; I don't recommend doing that. If you have tabular data, it's probably better to use a DataFrame. A DataFrame is not two-dimensional in that sense (rows have no names), but that can be fixed by adding an additional column holding the row labels and selecting on it, as sketched below.
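A minimal sketch using the DataFrames package (assuming it is installed; the column name id is just illustrative):
using DataFrames

df = DataFrame(id = ["P", "Q"], a = [1, 10], b = [2, 20])

# look up the value in column :a for the row labelled "P"
df[df.id .== "P", :a]  # 1-element vector: [1]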
I have a problem with the stem-and-leaf plot function.
One example:
I want to make a stem-and-leaf plot of the correlation coefficients of my meta-analysis. Here I have just 2 correlation coefficients (0.056 and -0.022).
I tried the following function:
y<-c(0.056, -0.022)
stem(y)
and I get the following result:
-2 | 2
-0 |
0 |
2 |
4 | 6
but that's not the right result; it should be:
0 | 6
-0 | 2
So I don't understand which function I have to use to get the right result.
I would be really thankful if somebody could help me!
Check out help(stem) and change the scale parameter to control the length of the stem plot:
R > stem(y, scale = 2)
The decimal point is 2 digit(s) to the left of the |
-2 | 2
-1 |
-0 |
0 |
1 |
2 |
3 |
4 |
5 | 6
Does that make more sense?
The closest I can get to your desired output is:
stem(y, scale = 0.5, atom = 0.1)
but it puts the negative stems at the top instead of the bottom.
The first output that you show is a correct answer (the 0.04 and 0.05 stems are grouped together), even if it is not the desired one.