I am trying to make a double nested loop in R. The source is stored in routes and looks like this:
Airline AirlineID SourceAirport SourceAirportID DestinationAirport DestinationAirportID Codeshare Stops Equipment
Where every row is a flight. I am concerned with the SourceAirportID and the DestinationAirportID columns. My double nested loop should use the SourceAirportID as the first index to access a list of DestinationAirportIDs. The nested list needs to be of variable size because airports will not all have the same number of destinations. Here is my attempt:
graph <- list()
for (i in 1:11922) {
  graph[i] <- list()
}
it <- 1
sid <- as.numeric(routes[1,4])
for (i in 1:length(routes$SourceAirportID)) {
  if (sid != as.numeric(routes[i,4])) {
    sid <- as.numeric(routes[i,4])
    it <- 1
  } else {
    it <- it + 1
  }
  graph[sid][it] <- routes[sid,6]
}
Here are the first 4 rows of routes:
      Airline AirlineID SourceAirport SourceAirportID DestinationAirport DestinationAirportID Codeshare Stops Equipment
17313      CG      1308           GKA               1                HGU                    3               0   DH8 DHT
17314      CG      1308           GKA               1                LAE                    4               0       DH8
17315      CG      1308           GKA               1                MAG                    2               0       DH8
17316      CG      1308           GKA               1                POM                    5               0       DH8
So I'm trying to get the list at graph[1] to contain 3 4 2 5. Instead, graph[1] is NULL, graph[2] contains 4, graph[3] contains 2, and graph[4] contains 5. My code also throws over 50 warnings, so clearly I am doing something very wrong.
Appreciate the help!
I'm not sure if this is what you want:
# reproducible data:
routes <- data.frame(SourceAirport = c('a','a','b','c','a','b','b','c'),
                     DestinationAirport = c('Q','W','D','D','Q','E','R','T'))
View(routes)
graph <- list()
o <- unique(routes$SourceAirport)
# DestinationAirports of routes that share the same SourceAirport
# are saved within one element of graph
for (i in 1:length(o)) {
  graph[[i]] <- routes$DestinationAirport[routes$SourceAirport == o[i]]
}
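As an aside, the same grouping can be written without an explicit loop. Here is a minimal sketch using base R's split(), assuming the routes data frame above; the result is a named list, so it also records which SourceAirport each element belongs to:
# split() groups the destinations by their source airport in one call
graph <- split(routes$DestinationAirport, routes$SourceAirport)
graph[["a"]]  # all destinations reachable from source "a"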
Below is my code:
my.dataset1 <- data.frame(site=c(11,12,13,14),
                          season=c(21,22,23,24),
                          PH=c(1,2,3,4))
for i in names(my.dataset1){
  for (j in nrow(my.dataset1)) {
    print(my.dataset1$i[j])
  }
}
What I want is for it to print these results:
11
12
13
14
21
22
23
24
1
2
3
4
What I actually get is:
NULL
It does not work. I want to get the results using just for loops!
The loop syntax must be fixed: the first for is missing its parentheses, names() should be replaced with 1:ncol() so that i is a numeric column index, and j has to run over 1:nrow() rather than the single value nrow(). Note also that my.dataset1$i looks for a column literally named "i", which is why you got NULL; indexing with [j, i] avoids that. This will work for you:
my.dataset1 <- data.frame(site=c(11,12,13,14),
                          season=c(21,22,23,24),
                          PH=c(1,2,3,4))
for (i in 1:ncol(my.dataset1)) {
  for (j in 1:nrow(my.dataset1)) {
    print(my.dataset1[j,i])
  }
}
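If you would rather keep iterating over the column names, [[ works where $ does not, because $ does not evaluate its argument. A small sketch on the same my.dataset1:
for (i in names(my.dataset1)) {
  for (j in 1:nrow(my.dataset1)) {
    # [[i]] looks up the column named by the value of i;
    # $i would look for a column literally called "i"
    print(my.dataset1[[i]][j])
  }
}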
I'm trying to create a function search(atr, spec, products) to search with certain parameters from a table like this:
  Zoom Size Resolution
1   12   98       3200
2   15  100       3500
3   10  120       3100
4    8   90       2500
If the attribute (atr) exists, the function should show which product(s) have at least the value given in spec for that attribute.
This is what I have but it doesn't seem to be working:
my_search = function(atr, spec, products) {
  # 1. is products a data.frame?
  if (is.data.frame(products)) {
    # 2. does atr exist in products?
    if (atr %in% names(products)) {
      result = products[which(products$spec >= spec), ]
      return(result)
    } else {
      print("Sorry, there is no product with that attribute")
    }
  } else {
    print("Sorry, the variable products has to be a data frame")
  }
}
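The likely culprit is products$spec: the $ operator looks for a column literally named "spec", not the column named by the atr argument, so the filter matches nothing. A minimal sketch of a fix for the inner branch, using [[ to select the column by the string held in atr:
if (atr %in% names(products)) {
  # products[[atr]] selects the column whose name is stored in atr
  result = products[products[[atr]] >= spec, ]
  return(result)
}
With the table above, my_search("Zoom", 11, products) should then return rows 1 and 2.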
I am banging my head against the wall trying to drop duplicates from a time series, based on the value of a datetime index.
My function is the following:
def csv_import_merge_T(f):
    dfsT = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True,
                        names=['datetime', 'temp', 'rh'], header=0) for fp in files]
    dfT = pd.concat(dfsT)
    # print dfT.head(); print dfT.index; print dfT.dtypes
    dfT.drop_duplicates(subset=index, inplace=True)
    dfT.resample('H').bfill()
    return dfT
which is called by:
inputcsvT = ['./input_csv/A08_KI_T*.csv']
for csvnameT in inputcsvT:
    files = glob.glob(csvnameT)
    print('___'); print(files)
    t = csv_import_merge_T(files)
print(t)
I receive the error
NameError: global name 'index' is not defined
what is wrong?
UPDATE:
The issue appears to arise when the CSV input files (which are to be concatenated) overlap.
inputcsvT = ['./input_csv/A08_KI_T*.csv'] gets files
A08_KI_T5
28/05/2015 17:00,22.973,24.021
...
08/10/2015 13:30,24.368,45.974
A08_KI_T6
08/10/2015 14:00,24.779,41.526
...
10/02/2016 17:00,22.326,41.83
and it runs correctly, whereas:
inputcsvT = ['./input_csv/A08_LR_T*.csv'] gathers
A08_LR_T5
28/05/2015 17:00,22.493,25.62
...
08/10/2015 13:30,24.296,44.596
A08_LR_T6
28/05/2015 17:00,22.493,25.62
...
10/02/2016 17:15,21.991,38.45
which leads to an error.
IIUC you can call reset_index and then drop_duplicates and then set_index again:
In [304]:
df = pd.DataFrame(data=np.random.randn(5,3), index=list('aabcd'))
df
Out[304]:
0 1 2
a 0.918546 -0.621496 -0.210479
a -1.154838 -2.282168 -0.060182
b 2.512519 -0.771701 -0.328421
c -0.583990 -0.460282 1.294791
d -1.018002 0.826218 0.110252
In [308]:
df.reset_index().drop_duplicates('index').set_index('index')
Out[308]:
0 1 2
index
a 0.918546 -0.621496 -0.210479
b 2.512519 -0.771701 -0.328421
c -0.583990 -0.460282 1.294791
d -1.018002 0.826218 0.110252
EDIT
Actually, there is a simpler method: call duplicated on the index and invert it:
In [309]:
df[~df.index.duplicated()]
Out[309]:
0 1 2
index
a 0.918546 -0.621496 -0.210479
b 2.512519 -0.771701 -0.328421
c -0.583990 -0.460282 1.294791
d -1.018002 0.826218 0.110252
I'm pretty new to statistics:
fisher = function(idxToTest, idxATI) {
  idxDependent = c()
  dependent = c()
  p = c()
  for (i in 1:length(idxToTest)) {
    tbl = table(data[[idxToTest[i]]], data[[idxATI]])
    rez = fisher.test(tbl, workspace = 20000000000)
    if (rez$p.value < 0.1) {
      dependent = c(dependent, TRUE)
      idxDependent = c(idxDependent, idxToTest[i])
    } else {
      dependent = c(dependent, FALSE)
    }
    p = c(p, rez$p.value)
  }
}
This is the function I use, and it seems to work.
What I have understood so far is that the first parameter I pass must be data like:
            Men Women
Dieting      10    30
Non-dieting   5    60
My data comes from a CSV:
data = read.csv('***.csv', header = TRUE, sep=',');
My first problem is that I don't know how to convert from:
Loan.Purpose Home.Ownership
lp_value_1 ho_value_2
lp_value_1 ho_value_2
lp_value_2 ho_value_1
lp_value_3 ho_value_2
lp_value_2 ho_value_3
lp_value_4 ho_value_2
lp_value_3 ho_value_3
to:
           ho_value_1 ho_value_2 ho_value_3
lp_value_1          0          2          0
lp_value_2          1          0          1
lp_value_3          0          1          1
lp_value_4          0          1          0
The second issue is that I don't know what the second parameter should be.
POST UPDATE: This is what I get using fisher.test(myTable):
Error in fisher.test(test) : FEXACT error 501.
The hash table key cannot be computed because the largest key
is larger than the largest representable int.
The algorithm cannot proceed.
Reduce the workspace size or use another algorithm.
where myTable is:
                   MORTGAGE NONE OTHER OWN RENT
car                      18    0     0   5   27
credit_card             190    0     2  38  214
debt_consolidation      620    0     2  87  598
educational               5    0     0   3    7
...
Basically, Fisher's exact test only works on smallish data sets because it requires a lot of memory. But all is good, because chi-square tests make minimal additional assumptions and are easier on the computer. Just do:
chisq.test(data$Loan.Purpose, data$Home.Ownership)
to get your p-values.
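As for the first part of the question, table() converts the two long-format columns into a contingency table directly. A minimal sketch, assuming the data frame read in above is named data:
# cross-tabulate the two factor columns into a contingency table
myTable <- table(data$Loan.Purpose, data$Home.Ownership)
myTable               # rows are Loan.Purpose levels, columns Home.Ownership levels
chisq.test(myTable)   # same test as passing the two columns directly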
Make sure you read through and understand the help page for chisq.test, especially the examples at the bottom.
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/chisq.test.html
Then look at a mosaic plot to see the quantities, like:
mosaicplot(table(data$Loan.Purpose, data$Home.Ownership))
This reference explains how mosaic plots work:
http://alumni.media.mit.edu/~tpminka/courses/36-350.2001/lectures/day12/
Please pardon me if the question sounds trivial or basic; I am a little new to this field.
I have a group of datasets (bags of words, to be specific) and I need to generate a proximity matrix using the edit distance between the strings.
I am, however, quite confused about how to keep track of my data/strings in the matrix. I need the proximity matrix for the purpose of clustering.
How do you generally approach this kind of problem in the field? I am using Perl and R to implement this.
Here is typical Perl code I have written that reads from a text file containing my bag of words:
use strict;
use warnings;
use Text::Levenshtein qw(distance);

main(@ARGV);

sub main
{
    my @TokenDistances;
    my $Tokenfile = 'TokenDistinct.txt';
    my @Token;
    my $AppendingCount = 0;
    my @Tokencompare;
    my %Levcount = ();
    open(FH, "< $Tokenfile") or die("Error opening file. $!");
    while (<FH>)
    {
        chomp $_;
        $_ =~ s/^(\s+)$//g;
        push(@Token, $_);
    }
    close(FH);
    @Tokencompare = @Token;
    foreach my $tokenWord (@Tokencompare)
    {
        my $lengthoffile = scalar @Tokencompare;
        my $i = 0;
        chomp $tokenWord;
        #@TokenDistances = levDistance($tokenWord, \@Tokencompare);
        for ($i = 0; $i < $lengthoffile; $i++)
        {
            if (scalar @TokenDistances == scalar @Tokencompare)
            {
                print "Yipeeeeeeeeeeeeeeeeeeeee\n";
            }
            chomp $tokenWord;
            chomp $Tokencompare[$i];
            #print $tokenWord . " {$Tokencompare[$i]} " . " $TokenDistances[$i] " . "\n";
            #$Levcount{$tokenWord}{$Tokencompare[$i]} = $TokenDistances[$i];
            $Levcount{$tokenWord}{$Tokencompare[$i]} = levDistance($tokenWord, $Tokencompare[$i]);
        }
        StoreSortedValues(\%Levcount, \$tokenWord, \$AppendingCount);
        $AppendingCount++;
        %Levcount = ();
    }
    # %Levcount = ();
}

sub levDistance
{
    my $string1 = shift;
    #my @StringList = @{(shift)};
    my $string2 = shift;
    return distance($string1, $string2);
}

sub StoreSortedValues {
    my $Levcount = shift;
    my $tokenWordTopMost = ${(shift)};
    my $j = ${(shift)};
    my @ListToken;
    my $Tokenfile = 'LevResult.txt';
    if ($j == 0)
    {
        open(FH, "> $Tokenfile") or die("Error opening file. $!");
    }
    else
    {
        open(FH, ">> $Tokenfile") or die("Error opening file. $!");
    }
    print $tokenWordTopMost;
    my %tokenWordMaster = %{ $Levcount->{$tokenWordTopMost} };
    @ListToken = sort { $tokenWordMaster{$a} cmp $tokenWordMaster{$b} } keys %tokenWordMaster;
    #@ListToken = keys %tokenWordMaster;
    print FH "-------------------------- " . $tokenWordTopMost . "-------------------------------------\n";
    #print FH map {"$_ \t=> $tokenWordMaster{$_} \n "} @ListToken;
    foreach my $tokey (@ListToken)
    {
        print FH "$tokey=>\t" . $tokenWordMaster{$tokey} . "\n";
    }
    close(FH) or die("Error Closing File. $!");
}
The problem is: how can I represent the proximity matrix from this and still keep track of which comparison corresponds to which entry in my matrix?
In the RecordLinkage package there is the levenshteinDist function, which is one way of calculating an edit distance between strings.
install.packages("RecordLinkage")
library(RecordLinkage)
Set up some data:
fruit <- c("Apple", "Apricot", "Avocado", "Banana", "Bilberry", "Blackberry",
           "Blackcurrant", "Blueberry", "Currant", "Cherry")
Now create a matrix consisting of zeros to reserve memory for the distance table. Then use nested for loops to calculate the individual distances. We end with a matrix with a row and a column for each fruit. Thus we can rename the columns and rows to be identical to the original vector.
fdist <- matrix(rep(0, length(fruit)^2), ncol = length(fruit))
for (i in seq_along(fruit)) {
  for (j in seq_along(fruit)) {
    fdist[i, j] <- levenshteinDist(fruit[i], fruit[j])
  }
}
rownames(fdist) <- colnames(fdist) <- fruit
The results:
fdist
             Apple Apricot Avocado Banana Bilberry Blackberry Blackcurrant
Apple            0       5       6      6        7          9           12
Apricot          5       0       6      7        8         10           10
Avocado          6       6       0      6        8          9           10
Banana           6       7       6      0        7          8            8
Bilberry         7       8       8      7        0          4            9
Blackberry       9      10       9      8        4          0            5
Blackcurrant    12      10      10      8        9          5            0
Blueberry        8       9       9      8        3          3            8
Currant          7       5       6      5        8         10            6
Cherry           6       7       7      6        4          6           10
The proximity or similarity (or dissimilarity) matrix is just a table that stores the similarity score for pairs of objects. So, if you have N objects, then the R code can be simMat <- matrix(nrow = N, ncol = N), and then each entry, (i,j), of simMat indicates the similarity between item i and item j.
In R, you can use several packages, including vwr, to calculate the Levenshtein edit distance.
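There is also a base R option that needs no extra package: adist() computes the full matrix of Levenshtein distances in one call. A minimal sketch, reusing the fruit vector from the answer above:
# adist() returns the edit-distance matrix for all pairs of strings
d <- adist(fruit)
dimnames(d) <- list(fruit, fruit)  # label rows and columns with the words
d["Apple", "Apricot"]              # 5, matching the table above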
You may also find this Wikibook to be of interest: http://en.wikibooks.org/wiki/R_Programming/Text_Processing