I'm trying to fetch toll cost data (the cost and tollCost groups of the JSON response). I'm making the call to https://route.api.here.com/routing/7.2/calculateroute.json with the following parameters:
alternatives: 0
currency: EUR
rollup: none,total,country
mode: fastest;truck;traffic:disabled
driver_cost: 0
vehicle_cost: 0.46
vehicleCostOnFerry: 0
routeAttributes: none,no,wp,lg,sc
jsonAttributes: 41
maneuverAttributes: none
linkAttributes: none,sh
legAttributes: none,li
cost_optimize: 1
metricsystem: metric
truckRestrictionPenalty: soft
tollVehicleType: 3
trailerType: 2
trailersCount: 1
vehicleNumberAxles: 2
trailerNumberAxles: 3
hybrid: 0
emissionType: 6
fuelType: diesel
height: 4
length: 16.55
width: 2.55
trailerHeight: 4
vehicleWeight: 16
limitedWeight: 40
weightperaxle: 10
disabledEquipped: 0
passengersCount: 2
tiresCount: 12
commercial: 1
detail: 1
heightAbove1stAxle: 3.5
waypoint0: geo!stopOver!46.8036700000,19.3648579000;;null
waypoint1: geo!stopOver!48.1872046178,14.0647109247;;null
waypoint2: geo!stopOver!48.0690426000,16.3346156000;;null
Based on the documentation (https://developer.here.com/documentation/fleet-telematics/dev_guide/topics/calculation-considerations.html), it should be enough to add the tollVehicleType parameter.
I'm surely missing something, and I would be very grateful for any support. Thank you.
If you have a problem, do not use 157 different arguments for an API; use only the minimum required ones to get a result - a minimal working reproducible example. Then add the additional arguments.
And try to rule out all other problematic factors - in this example I just pasted the GET request into the address bar of the browser and looked at the output.
That way there is no possible interference of any kind from a programming language.
I just did this: registered for the HERE API, got the API key, looked at the API (I had never used HERE before), and 4 minutes later I had the toll cost. :-)
Please replace the XXXXXXXXXXXXXXXXXXXXXX with your own key:
https://fleet.ls.hereapi.com/2/calculateroute.json?apiKey=XXXXXXXXXXXXXXXXXXXXXX
&mode=fastest;truck;traffic:disabled
&tollVehicleType=3
&waypoint0=50.10992,8.69030
&waypoint1=50.00658,8.29096
and look at what is returned AT THE END of the roughly 5 pages of JSON (there is a lot of JSON before it):
"cost":{"totalCost":"4.95","currency":"EUR","details":{"driverCost":"0.0","vehicleCost":"0.0",
"tollCost":"4.95","optionalValue":"0.0"}},
"tollCost":{"onError":false}}],"warnings":[{"message":"No vehicle height specified, assuming 380 cm","code":1},{"message":"No vehicle weight specified, assuming 11000 kg","code":1},{"message":"No vehicle total weight specified, assuming 11000 kg","code":1},{"message":"No vehicle height above first axle specified, assuming 380 cm","code":1},{"message":"No vehicle total length specified, assuming 1000 cm","code":1}],"language":"en-us"}}
HERE ARE YOUR TOLL COSTS :-)
(NOTE: Keep in mind that there may simply be no toll for a given vehicle type and country!
I also got a cost result with a car, but it was 0 because there is no toll for cars in Germany (only for trucks), which is where the example waypoints are located.)
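If you want to make the same minimal call from R instead of the browser, here is a sketch using the httr and jsonlite packages (the package choice and the exact path into the JSON are my assumptions; verify them against the response you actually get back):

library(httr)
library(jsonlite)

# same minimal request as above; replace the X's with your own key
res <- GET("https://fleet.ls.hereapi.com/2/calculateroute.json",
           query = list(apiKey          = "XXXXXXXXXXXXXXXXXXXXXX",
                        mode            = "fastest;truck;traffic:disabled",
                        tollVehicleType = 3,
                        waypoint0       = "50.10992,8.69030",
                        waypoint1       = "50.00658,8.29096"))
body <- fromJSON(content(res, as = "text"), simplifyVector = FALSE)

# the cost group sits on the route object, as in the output shown above
body$response$route[[1]]$cost$totalCost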
I am having trouble with the findCorrelation() function. Here are my input and output.
>findCorrelation(cor(subset(segdata, select=-c(56))),cutoff=0.9)
[1] 16 17 14 15 30 51 31 25 40
>cor(segdata)[c(16,17,14,15,30,51,31,25,40),c(16,17,14,15,30,51,31,25,40)]
(image: the printed correlation submatrix of the flagged columns)
I deleted column 56 because it is a factor variable.
In the code above I use cutoff=0.9, which I understand to mean: report only those variables whose correlation is greater than or equal to 0.9.
But in the result image, the last variable (P12002900) has a very low correlation. Since I used cutoff=0.9, low correlations such as P12002900's should not be in the output.
Why is it printed?
So I tried the Vehicle data set that comes with R (in the mlbench package).
>library(mlbench)
>library(caret)
>data(Vehicle)
>findCorrelation(cor(subset(Vehicle,select=-c(Class))),cutoff=0.9)
[1]  3  8 11  7  9  2
>cor(subset(Vehicle,select=-c(Class)))[c(3,8,11,7,9,2),c(3,8,11,7,9,2)]
This is the result:
(image: correlation matrix of columns 3, 8, 11, 7, 9, 2)
The last variable (Circ) has a correlation lower than 0.9, but it is printed anyway.
Please help me - thank you!
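One note that may explain this (based on the caret documentation, not on anything in the posts above): findCorrelation() returns the columns it suggests removing so that no highly correlated pair remains. It does not return a mutually correlated set, so the submatrix of the flagged columns is not expected to have every entry above the cutoff. A quick sketch to list the pairs that actually exceed 0.9:

library(mlbench)
data(Vehicle)

cc <- cor(subset(Vehicle, select = -c(Class)))
# every pair whose absolute correlation exceeds the cutoff
high <- which(abs(cc) > 0.9 & upper.tri(cc), arr.ind = TRUE)
data.frame(var1 = rownames(cc)[high[, 1]],
           var2 = colnames(cc)[high[, 2]],
           cor  = cc[high])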
Using the following dataset:
ID=c(1:24)
COST=c(85,109,90,104,107,87,99,95,82,112,105,89,101,93,111,83,113,81,97,97,91,103,86,108)
POINTS=c(113,96,111,85,94,105,105,95,107,88,113,100,96,89,89,93,100,92,109,90,101,114,112,109)
mydata=data.frame(ID,COST,POINTS)
I need an R function that will consider all combinations of rows where the sum of 'COST' is less than a fixed value - in this case, $500 - and make the optimal selection based on the summed 'POINTS'.
Your help is appreciated.
So since this post is still open, I thought I would give my solution. These kinds of problems are always fun. You can try to brute-force the solution by checking all possible combinations (some 2^24, or over 16 million) one by one. This can be done by noting that each row either is or is not in a given combination. Thinking in binary, you could use the following code, which was inspired by this post:
#DO NOT RUN THIS CODE - it would take many hours
sum_points <- numeric(2^24)  # pre-allocate, or the first assignment fails
for(i in 1:2^24)
  # bit j of i says whether row j is in this combination
  sum_points[i] <- ifelse(sum(as.numeric(intToBits(i))[1:24] * mydata$COST) < 500,
                          sum(as.numeric(intToBits(i))[1:24] * mydata$POINTS),
                          0)
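To see how the bit trick encodes a subset (a tiny illustration of intToBits(), not part of the original loop):

# 11 is 1011 in binary, so rows 1, 2 and 4 are in the combination
as.numeric(intToBits(11))[1:5]
# [1] 1 1 0 1 0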
I estimate this would take many hours to run. Improvements could be made with parallelization, etc., but it is still a rather intense calculation. This method also scales poorly: adding a single ID (25 instead of 24) doubles the computation time. Another option would be to cheat a little. For example, we know that we have to stay under $500. If we added up the n cheapest items, at what n would we definitely be over $500?
which(cumsum(sort(mydata$COST))>500)
[1] 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
So with any more than 5 IDs chosen, we are definitely over $500. What else?
Well, we can run a little code over a small slice of the search space, take the max for that portion, and see what it tells us.
sum_points<-1:10000
for(i in 1:10000)
sum_points[i]<-ifelse(sum(as.numeric((intToBits(i)))[1:24]) <6,
ifelse(sum(as.numeric((intToBits(i)))[1:24] * mydata$COST) < 500,
sum(as.numeric((intToBits(i)))[1:24] * mydata$POINTS),
0),
0)
sum_points[which.max(sum_points)]
[1] 549
So we have to try to get over 549 points with the remaining 2^24 - 10000 choices. But:
which(cumsum(rev(sort(mydata$POINTS)))<549)
[1] 1 2 3 4
Even if we sum the 4 highest point values, we still don't beat 549, so there is no reason to search selections of 4 or fewer IDs. Together with the cost bound, the number of IDs in any better selection must be greater than 4 but less than 6 - that is, exactly 5. Instead of looking at all 16 million choices, we can just look at all the ways to choose 5 out of 24, which happens to be 24 choose 5:
num<-1:choose(24,5)
combs<-combn(24,5)
sum_points<-1:length(num)
for(i in num)
sum_points[i]<-ifelse(sum(mydata[combs[,i],]$COST) < 500,
sum(mydata[combs[,i],]$POINTS),
0)
which.max(sum_points)
[1] 2582
sum_points[2582]
[1] 563
We have a new max on the 2582nd iteration. To retrieve the IDs:
mydata[combs[,2582],]$ID
[1] 1 3 11 22 23
And to verify that nothing went wrong:
sum(mydata[combs[,2582],]$COST)
[1] 469 #less than 500
sum(mydata[combs[,2582],]$POINTS)
[1] 563 #what we expected.
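As a footnote: this is a 0/1 knapsack problem, so an exact integer-programming solver can skip the enumeration entirely. A sketch with the lpSolve package (the package choice is mine; mydata is the data frame from the question, and since the costs are integers, "strictly less than 500" is the same as "<= 499"). It should recover the same selection found above:

library(lpSolve)

# maximize total POINTS with one binary decision variable per row,
# subject to the total COST staying under $500 (i.e. <= 499)
sol <- lp(direction    = "max",
          objective.in = mydata$POINTS,
          const.mat    = matrix(mydata$COST, nrow = 1),
          const.dir    = "<=",
          const.rhs    = 499,
          all.bin      = TRUE)

sol$objval                    # best attainable points
mydata$ID[sol$solution == 1]  # the chosen IDs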
So this is the question.
Suppose you track your commute times for two weeks (10 days) and you find the following times in minutes
17 16 20 24 22 15 21 15 17 22
Suppose that the '24' was a mistake and it should have been 18. Write code that fixes this, i.e. changes '24' to '18'. Then compute the new mean and standard deviation of the commute times.
Write code that counts the number of instances where the commute time is at least 20 minutes. Then convert this into a percentage.
This is my solution for Q3, and the output I got when I ran it. Could anybody tell me whether it is correct?
commute <- c(17,16,20,24,22,15,21,15,17,22)
commute[commute==24] <- 18
n <- length(commute)
sum((commute>=20)/n)
[1] 0.4
To complete user20650's answer, you could use a formatted string to display the outcome as a percentage, as requested:
sprintf("%0.2f%%",100* mean(commute>=20))
[1] "40.00%"
I am having a tough time understanding how to formulate a cutting stock problem in code. I have searched the web extensively and I see a lot of theory but no actual examples.
The majority of query results point to the wikipedia page: http://en.wikipedia.org/wiki/Cutting_stock_problem
There are 13 final widths to be produced, with the required amounts indicated alongside.
The machine produces a 5600-wide piece by default, to be cut into the widths below. The goal is to minimize waste.
Widths/Required amount
1380 22
1520 25
1560 12
1710 14
1820 18
1880 18
1930 20
2000 10
2050 12
2100 14
2140 16
2150 18
2200 20
Would someone show me how to formulate this solution in R with lpSolve / lpSolveAPI?
stock=5600
widths=c(1380,1520,1560,1710,1820,1880,1930,2000,2050,2100,2140,2150,2200)
required=c(22,25,12,14,18,18,20,10,12,14,16,18,20)
library(lpSolveAPI)
...
solve(lprec)
get.variables(lprec)
You could model it as a mixed integer program (MIP) and solve it using various techniques. Of course, to generate the variables (i.e. valid patterns of widths) you need a suitable column generation method.
Have a look at this C++ project: https://code.google.com/p/cspsol
cspsol is based on the GLPK API library and uses column generation plus branch & bound to solve the MIP. It may give you some hints about how to do it in R.
Good luck!
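That said, for this particular instance full column generation may be overkill: the smallest width is 1380, so at most four pieces fit into one 5600 roll (5 x 1380 = 6900 > 5600), and the complete set of feasible patterns is small enough to enumerate outright. Below is a sketch of that simpler route with lpSolveAPI, filling in the skeleton from the question; the pattern-enumeration helper and the "minimize rolls used" objective (the standard cutting-stock formulation) are my additions, not from the posts above:

stock    <- 5600
widths   <- c(1380,1520,1560,1710,1820,1880,1930,2000,2050,2100,2140,2150,2200)
required <- c(22,25,12,14,18,18,20,10,12,14,16,18,20)
n <- length(widths)

# enumerate every feasible pattern: a vector of piece counts per width
# that fits into one stock roll (at most 4 pieces, so the set stays small)
pats <- list()
gen <- function(start, remaining, current) {
  for (i in start:n) {
    if (widths[i] <= remaining) {
      pat <- current
      pat[i] <- pat[i] + 1
      pats[[length(pats) + 1]] <<- pat
      gen(i, remaining - widths[i], pat)
    }
  }
}
gen(1, stock, integer(n))
P <- do.call(cbind, pats)  # one column per feasible pattern

library(lpSolveAPI)
lprec <- make.lp(nrow = n, ncol = ncol(P))
for (j in seq_len(ncol(P)))
  set.column(lprec, j, P[, j], indices = 1:n)
set.objfn(lprec, rep(1, ncol(P)))     # each pattern used costs one roll
set.constr.type(lprec, rep(">=", n))  # meet or exceed every demand
set.rhs(lprec, required)
set.type(lprec, seq_len(ncol(P)), "integer")
lp.control(lprec, sense = "min")

solve(lprec)
get.objective(lprec)   # number of rolls cut
get.variables(lprec)   # how often each pattern is used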