Information retrieval, precision and recall in Python - information-retrieval

Determine the average precision and recall for the ten queries, retrieving the top 10 documents for each.
For example:
ID of retrieved documents = [307 322 256 325 54 267 303 333 375 287]
ID of relevant documents = [11 99 100 307 54]
How do I calculate precision and recall in Python?
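For one query, precision is the fraction of the retrieved documents that are relevant, and recall is the fraction of the relevant documents that were retrieved. With the example IDs above the overlap is {307, 54}, so precision = 2/10 = 0.2 and recall = 2/5 = 0.4. A minimal sketch of that set arithmetic (written in R, like the other examples on this page; the same intersection-and-length logic carries over directly to Python sets):
retrieved <- c(307, 322, 256, 325, 54, 267, 303, 333, 375, 287)  # top-10 results for one query
relevant  <- c(11, 99, 100, 307, 54)                             # ground-truth relevant documents
hits      <- length(intersect(retrieved, relevant))  # relevant documents that were retrieved
precision <- hits / length(retrieved)                # 2 / 10 = 0.2
recall    <- hits / length(relevant)                 # 2 / 5  = 0.4
To average over the ten queries, compute this pair for each query and take the mean of each.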

Related

Indexing through 'names' in a list and performing actions on contained values in R

I have a data set of counts from standard solutions passed through an instrument that analyses chemical concentrations (an ICPMS for those familiar). The data is over a range of different standards and for each standard I have four repeat measurements that I want to calculate the mean and variance of.
I'm importing the data from an Excel spreadsheet and then, following some housekeeping such as getting dates and times into the right format, I split the dataset up into a list identified by the name of the standard solution using Count11.sp <- split(Count11.raw, Count11.raw$Type). Count11.raw$Type then becomes the list element name, and I have the four count results for each chemical element in that list element.
So far so good.
I find I can get an average (mean, median, etc.) easily enough by identifying the list element specifically, i.e. mean(Count11.sp$'Ca40'), or sapply(Count11.sp$'Ca40', median), but what I'm not able to do is automate that in a loop so that I can calculate the means for each standard and drop them into a numerical matrix for further manipulation. I can extract the list element names with names(), and I can even use a loop to make a vector of all the names and reference the specific list elements using these in a for loop.
For instance, Count11.sp[names(Count11.sp[i])] will extract the full list element no problem:
$`Post Ca45t`
Type Run Date 7Li 9Be 24Mg 43Ca 52Cr 55Mn 59Co 60Ni
77 Post Ca45t 1 2011-02-08 00:13:08 114 26101 4191 453525 2632 520 714 2270
78 Post Ca45t 2 2011-02-08 00:13:24 114 26045 4179 454299 2822 524 704 2444
79 Post Ca45t 3 2011-02-08 00:13:41 96 26372 3961 456293 2898 520 762 2244
80 Post Ca45t 4 2011-02-08 00:13:58 112 26244 3799 454702 2630 510 792 2356
65Cu 66Zn 85Rb 86Sr 111Cd 115In 118Sn 137Ba 140Ce 141Pr 157Gd 185Re 208Pb
77 244 1036 56 3081 44 520625 78 166 724 10 0 388998 613
78 250 982 70 3103 46 526154 76 174 744 16 4 396496 644
79 246 1014 36 3183 56 524195 60 198 744 2 0 396024 612
80 270 932 60 3137 44 523366 70 180 824 2 4 390436 632
238U
77 24
78 20
79 14
80 6
but sapply(Count11.sp[names(Count11.sp[i])], median) produces an error message: Error in median.default(X[[i]], ...) : need numeric data
while sapply(Input$`Post Ca45t`, median) (with 'Post Ca45t' being the name of Count11.sp[i], here i = 4) does exactly what I want and produces the median values (I can clean that vector up later for medians that don't make sense), e.g.
Type Run Date 7Li 9Be 24Mg
NA 2.5 1297109612.5 113.0 26172.5 4070.0
43Ca 52Cr 55Mn 59Co 60Ni 65Cu
454500.5 2727.0 520.0 738.0 2313.0 248.0
66Zn 85Rb 86Sr 111Cd 115In 118Sn
998.0 58.0 3120.0 45.0 523780.5 73.0
137Ba 140Ce 141Pr 157Gd 185Re 208Pb
177.0 744.0 6.0 2.0 393230.0 622.5
238U
17.0
Can anyone give me any insight into how I can automate (i.e. loop through) these names to produce one median vector per list element? I'm sure there's just some simple disconnect in my logic here that may be easily solved.
Update: I've solved the problem. The way to do so is to use tapply on the original dataset without the need to split it. tapply allows functions to be applied to data based on a user-defined grouping criterion. In my case I can group according to Count11.raw$Type and then take the mean of the data subset: tapply(Count11.raw[, 3:ncol(Count11.raw)], Count11.raw$Type, mean). Job done.
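For completeness, here is a small sketch of the loop over the split list that the question originally asked about, i.e. one median vector per list element (assuming, as above, that the first columns are Type, Run and Date and the remaining columns are the numeric counts):
count.cols <- !(names(Count11.raw) %in% c("Type", "Run", "Date"))   # keep only the numeric count columns
medians <- sapply(Count11.sp, function(std) sapply(std[, count.cols], median))
# 'medians' is a numeric matrix: one column per standard solution, one row per chemical element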

Equation for non linear data

I have a set of non-linear data. The data consists of the X & Y coordinates of different objects/points in a video (that is, the x & y pixel coordinates of the same objects in all the frames of the video). Upon plotting the values from one frame, I get a non-linear graph, as shown in the picture.
I want to form an equation for this graph so that, if I have a known X coordinate in this frame, the corresponding Y coordinate can be obtained from this equation (a kind of prediction of the new position; I am not sure whether this idea is correct or not).
OR
If this idea is illogical, can you suggest something that will work, so that I can predict the location of a new object using these data?
Any help or new ideas are highly appreciated.
A sample of my data is given below:
X Y
----------
214 182
830 185
1451 173
219 554
1453 548
214 941
830 934
1455 942
213 190
829 193
1450 181
218 561
1452 555
214 945
830 938
1455 946
213 190
828 193
1451 182
219 560
1452 554
214 945
830 938
1455 946
213 190
829 193
1450 181
219 556
1453 550
215 936
830 929
1455 937
I have selected 9 objects in each frame, so the first 9 data points belong to one frame, and so on.
Your XY data looks like this: there are clusters located at the corners and mid-edges. When the lines that connect successive points are added, you can see that the points come in groups of 8, in the sequence used in the case statements below. You can predict the location of a point using its index:
// predict location `(x,y)` of point based on index `i`
point = MOD(i-1,8)+1; // get number 1-8 of the point (as shown above)
select case point
case [1,4,6] : x = 215;
case [2,7] : x = 829;
case [3,5,8] : x = 1463;
end select
select case point
case [1,2,3] : y = 186;
case [4,5] : y = 555;
case [6,7,8] : y = 940;
end select
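The same lookup, written as a small R function (a sketch; the x/y constants are the approximate cluster centres used in the pseudocode above):
predict_xy <- function(i) {
  p <- (i - 1) %% 8 + 1                                   # which of the 8 positions this index falls on
  x <- c(215, 829, 1463, 215, 1463, 215, 829, 1463)[p]    # cluster x-centre for positions 1-8
  y <- c(186, 186, 186, 555, 555, 940, 940, 940)[p]       # cluster y-centre for positions 1-8
  c(x = x, y = y)
}
predict_xy(10)   # second point of the second group -> x = 829, y = 186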
You have to cut this curve into many linear segments; then, depending on the value of X, you will be on one particular segment, and it is easy to calculate the equation of a line when you know two of its points.
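In R that amounts to ordinary piecewise-linear interpolation, for example with the built-in approx() (a sketch, assuming the known x values are sorted and you only need y as a function of x):
known_x <- c(214, 830, 1451)             # two or more known points along one part of the curve
known_y <- c(182, 185, 173)
approx(known_x, known_y, xout = 500)$y   # linear interpolation of y at x = 500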

R is not taking the parameter hgap in layout_with_sugiyama

I'm working in R on a graph and I'd like to produce a hierarchical plot, based on the values in the vector S (one value for each node).
lay2 <- layout_with_sugiyama(grafo, attributes="all", layers = S, hgap=10, vgap=10)
plot(lay2$extd_graph, vertex.label.cex=0.5)
However, the parameters hgap and vgap seem not to be taken into account, and the graph is really confusing (not least because I've got 162 nodes).
Am I doing something wrong, or is there another way in which I can make a hierarchical graph?
I believe that layout_with_sugiyama is working just fine, but you may be misinterpreting the output. Since you do not provide any data, I will illustrate with some randomly generated data.
library(igraph)
set.seed(1234)
grafo = erdos.renyi.game(162, 0.03)
lay2 <- layout_with_sugiyama(grafo, attributes="all",
hgap=10, vgap=10)
plot(lay2$extd_graph, vertex.label.cex=0.5, vertex.size=9)
I think the source of your question is the fact that the nodes are a bit crowded together in the horizontal direction. But that should be expected. Let's analyze the layout, starting with the easy part, the vertical direction.
table(lay2$layout[,2])
1 11 21 31 41
24 82 42 13 1
You can see that vgap worked. The spacing is 10 units apart. The second line up (y = 11) has 82 nodes. Unless the nodes are tiny, 82 nodes on a single horizontal line will overlap. But aren't they supposed to have a spacing of at least 10? They do! Let's look at that second line.
sort(lay2$layout[lay2$layout[,2]==11,1])
[1] -25 -15 -5 5 15 25 35 45 55 65 75 85 95 105 115 125 135 230
[19] 240 260 270 280 290 300 310 320 330 340 350 360 370 380 390 400 410 420
[37] 430 440 450 460 470 480 490 500 510 520 530 540 550 560 570 580 590 600
[55] 610 620 630 640 655 665 675 685 695 720 730 740 750 760 770 780 790 800
[73] 810 820 830 840 850 860 870 880 890 910
Looking at the whole graph, there is a slightly broader range.
range(lay2$layout[,1])
[1] -65 910
None of the numbers are less than 10 apart, as requested. hgap worked too!
However, what happens when you try to plot that? If you read the part of the ?igraph.plotting help page that refers to the parameter rescale, you will see:
rescale:
Logical constant, whether to rescale the coordinates to the [-1,1] x [-1,1] interval. Defaults to TRUE, the layout will be rescaled.
So the layout will be rescaled to a range of [-1,1] and then plotted. Scaled or not, you need to fit 82 nodes into a single horizontal row, so it is very difficult to avoid overlapping nodes.
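If you want to see the unscaled hgap/vgap spacing, one option (a sketch; it only really helps if your plotting device is wide enough) is to switch the rescaling off and supply the axis limits yourself:
plot(lay2$extd_graph, vertex.label.cex = 0.5, vertex.size = 9,
     rescale = FALSE, asp = 0,              # use the layout coordinates as they are
     xlim = range(lay2$layout[, 1]),        # x range of the Sugiyama layout
     ylim = range(lay2$layout[, 2]))        # y range of the Sugiyama layout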

Natural Neighbor Interpolation in R

I need to conduct Natural Neighbor Interpolation (NNI) via R in order to smooth my numeric data. For example, say I have very spurious data; my goal is to use NNI to model the data neatly.
I have several hundred rows of data (one observation for each postcode), alongside latitudes and longitudes. I've made up some data below:
Postcode lat lon Value
200 -35.277272 149.117136 7
221 -35.201372 149.095065 38
800 -12.801028 130.955789 27
801 -12.801028 130.955789 3
804 -12.432181 130.84331 29
810 -12.378451 130.877014 20
811 -12.376597 130.850489 3
812 -12.400091 130.913672 42
814 -12.382572 130.853877 32
820 -12.410444 130.856124 39
821 -12.426641 130.882367 39
822 -12.799278 131.131697 49
828 -12.474896 130.907378 38
829 -14.460879 132.280002 34
830 -12.487233 130.972637 8
831 -12.480066 130.984006 49
832 -12.492269 130.990891 29
835 -12.48138 131.029173 33
836 -12.525546 131.103025 40
837 -12.460094 130.842663 39
838 -12.709507 130.995407 28
840 -12.717562 130.351316 22
841 -12.801028 130.955789 8
845 -13.038663 131.072091 19
846 -13.226806 131.098416 50
847 -13.824123 131.835799 11
850 -14.464497 132.262021 2
851 -14.464497 132.262021 23
852 -14.92267 133.064654 36
854 -16.81839 137.14707 17
860 -19.648306 134.186642 3
861 -18.94406 134.318373 8
862 -20.231104 137.762232 28
870 -12.436101 130.84059 24
871 -12.436101 130.84059 16
Is there any kind of package that will do this? I should mention that the only predictors I am using in this model are latitude and longitude. If there isn't a package that can do this, how can I implement it manually? I've searched extensively and I can't figure out how to implement this in R. I have seen one or two other SO posts, but they haven't helped me figure this out.
Please let me know if there's anything I should add to the question. Thanks.
I suggest the following:
Reproject the data to the corresponding UTM Zone.
Use the R whitebox package (a front end for WhiteboxTools) to process the data using natural neighbour interpolation.
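A rough sketch of that workflow, assuming the table above is in a data frame called d, that the sf package is used for the reprojection, and that your installed whitebox package exposes wbt_natural_neighbour_interpolation() (the EPSG code below is just an example zone; pick the UTM/MGA zone that actually covers your points):
library(sf)
library(whitebox)
pts <- st_as_sf(d, coords = c("lon", "lat"), crs = 4326)  # lon/lat points, WGS84
pts <- st_transform(pts, 28352)                           # example: GDA94 / MGA zone 52
st_write(pts, "points_utm.shp")
# hypothetical call: interpolate the 'Value' field onto a raster grid
wbt_natural_neighbour_interpolation(input = "points_utm.shp", field = "Value",
                                    output = "nni.tif", cell_size = 1000)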

Adding and subtracting values in multiple data frames of different lengths - flow analysis

Thank you jakub and Hack-R!
Yes, these are my actual data. The data I am starting from are the following:
[A] #first, longer dataset
CODE_t2 VALUE_t2
111 3641
112 1691
121 1271
122 185
123 522
124 0
131 0
132 0
133 0
141 626
142 170
211 0
212 0
213 0
221 0
222 0
223 0
231 95
241 0
242 0
243 0
244 0
311 129
312 1214
313 0
321 0
322 0
323 565
324 0
331 0
332 0
333 0
334 0
335 0
411 0
412 0
421 0
422 0
423 0
511 6
512 0
521 0
522 0
523 87
In the above table, we can see the 44 land use CODES (which I inappropriately named "class" in my first entry) for a certain city. Some values are just 0, meaning that there are no land uses of that type in that city.
Starting from this table, which displays all the land use types for t2 and their corresponding values ("VALUE_t2"), I have to reconstruct the previous amount of land use ("VALUE_t1") for each type.
To do so, I have to add and subtract the value for each land use (if not 0) by using the "change of land use" table from t2 to t1, which is the following:
[B] #second, shorter dataset
CODE_t2 CODE_t1 VALUE_CHANGE1
121 112 2
121 133 12
121 323 0
121 511 3
121 523 2
123 523 4
133 123 3
133 523 4
141 231 12
141 511 37
So, in order to get VALUE_t1 from VALUE_t2, I have, for instance, to subtract 2 + 12 + 0 + 3 + 2 hectares (the first 5 values of the second, shorter table) from the value of land use type/code 121 in the first, longer table (1271 ha), and to add 2 hectares to land type 112, 12 hectares to land type 133, 3 hectares to land type 511 and 2 hectares to land type 523. I have to do that for all the land use types different from 0, and later also from t1 to t0.
What I have to do is a sort of loop that would both add and subtract, for each land use type/code, the values from VALUE_t2 to VALUE_t1, and then from VALUE_t1 to VALUE_t0.
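A minimal sketch of that add-and-subtract step, assuming the long and short tables are data frames called A and B with the column names shown above (the same logic would then be repeated with the t1-to-t0 change table):
out_change <- aggregate(VALUE_CHANGE1 ~ CODE_t2, data = B, FUN = sum)   # area leaving each t2 code
in_change  <- aggregate(VALUE_CHANGE1 ~ CODE_t1, data = B, FUN = sum)   # area gained by each t1 code
minus <- out_change$VALUE_CHANGE1[match(A$CODE_t2, out_change$CODE_t2)]
plus  <- in_change$VALUE_CHANGE1[match(A$CODE_t2, in_change$CODE_t1)]
minus[is.na(minus)] <- 0    # codes that do not appear in B keep their value
plus[is.na(plus)]   <- 0
A$VALUE_t1 <- A$VALUE_t2 - minus + plus   # all 44 rows of A are kept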
Once I estimated VALUE_t1 and VALUE_t0, I will put the values in a simple table showing the relative variation (here the values are not real):
CODE VALUE_t0 VALUE_t2 % VAR t2-t0
code1 50 100 ((100-50)/50)*100
code2 70 80 ((80-70)/70)*100
code3 45 34 ((34-45)/45)*100
What I could do so far is:
land_code <- names(A)[-1]
land_code
A$VALUE_t1 <- for(code in land_code){
cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
}
If I use the loop I get an error, while if I take it away:
A$VALUE_t1 <- cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
it works, but I don't really get what I want. So far I have been working on how to get a new column that would contain the new "add & subtract" values, but I haven't succeeded yet, so I worked on how to get a new column that would at least match the land use types first, and then include the "add and subtract" formula.
Another problem is that, by using match, I get a shorter A$VALUE_t1 table (13 rows instead of 44), while I would like to keep all the land use types in dataset A, because I will then have to match it with the table including VALUES_t0 (which I haven't shown here).
Sorry that I cannot do better than this at the moment, and I hope I have explained clearly enough what I have to do. I am extremely grateful for any help you can provide.
Thanks a lot!
