I'm trying to fetch toll cost data (the cost and TollCost groups from the JSON response). I'm making the call to https://route.api.here.com/routing/7.2/calculateroute.json with the following parameters:
alternatives: 0
currency: EUR
rollup: none,total,country
mode: fastest;truck;traffic:disabled
driver_cost: 0
vehicle_cost: 0.46
vehicleCostOnFerry: 0
routeAttributes: none,no,wp,lg,sc
jsonAttributes: 41
maneuvreAttributes: none
linkAttributes: none,sh
legAttributes: none,li
cost_optimize: 1
metricsystem: metric
truckRestrictionPenalty: soft
tollVehicleType: 3
trailerType: 2
trailersCount: 1
vehicleNumberAxles: 2
trailerNumberAxles: 3
hybrid: 0
emissionType: 6
fuelType: diesel
height: 4
length: 16.55
width: 2.55
trailerHeight: 4
vehicleWeight: 16
limitedWeight: 40
weightperaxle: 10
disabledEquipped: 0
passengersCount: 2
tiresCount: 12
commercial: 1
detail: 1
heightAbove1stAxle: 3.5
waypoint0: geo!stopOver!46.8036700000,19.3648579000;;null
waypoint1: geo!stopOver!48.1872046178,14.0647109247;;null
waypoint2: geo!stopOver!48.0690426000,16.3346156000;;null
Based on the documentation (https://developer.here.com/documentation/fleet-telematics/dev_guide/topics/calculation-considerations.html), it should be enough to add the tollVehicleType parameter.
I'm surely missing something, but I would be very grateful for any support. Thank you.
If you have a problem, do not use 157 different arguments for an API; use only the minimum required ones to get a result, i.e. a minimal working reproducible example. Then add the additional arguments.
And try to eliminate all other problematic factors: in this example I just pasted the GET request into the browser's address bar and looked at the output.
That way there is no possible interference of any kind from the programming language.
I just did this: registered for the HERE API, got the API key, looked at the API (I had never done anything with HERE before), and 4 minutes later I got the toll cost. :-)
Please replace the XXXXXXXXXXX with your own key:
https://fleet.ls.hereapi.com/2/calculateroute.json?apiKey=XXXXXXXXXXXXXXXXXXXXXX
&mode=fastest;truck;traffic:disabled
&tollVehicleType=3
&waypoint0=50.10992,8.69030
&waypoint1=50.00658,8.29096
and look at what it returned AT THE END of 5 pages of JSON (there is a lot of JSON before it):
"cost":{"totalCost":"4.95","currency":"EUR","details":{"driverCost":"0.0","vehicleCost":"0.0",
"tollCost":"4.95","optionalValue":"0.0"}},
"tollCost":{"onError":false}}],"warnings":[{"message":"No vehicle height specified, assuming 380 cm","code":1},{"message":"No vehicle weight specified, assuming 11000 kg","code":1},{"message":"No vehicle total weight specified, assuming 11000 kg","code":1},{"message":"No vehicle height above first axle specified, assuming 380 cm","code":1},{"message":"No vehicle total length specified, assuming 1000 cm","code":1}],"language":"en-us"}}
HERE IS YOUR TOLL COST :-)
(NOTE: Keep in mind that there may be no toll at all for a given vehicle type and country!
I also got a cost result with a car, but it was 0.00 because there is no toll for cars in Germany (only for trucks), which is where the example waypoints are located.)
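If you would rather make the same minimal request from code instead of the browser, a rough R sketch would be the following (httr and jsonlite are assumed to be installed; the exact path to the cost block inside the parsed list may differ slightly):

library(httr)
library(jsonlite)

resp <- GET("https://fleet.ls.hereapi.com/2/calculateroute.json",
            query = list(apiKey          = "XXXXXXXXXXXXXXXXXXXXXX",
                         mode            = "fastest;truck;traffic:disabled",
                         tollVehicleType = 3,
                         waypoint0       = "50.10992,8.69030",
                         waypoint1       = "50.00658,8.29096"))

# parse the JSON body and look at the cost block of the first route
body <- fromJSON(content(resp, as = "text", encoding = "UTF-8"),
                 simplifyVector = FALSE)
body$response$route[[1]]$cost   # totalCost, currency and details$tollCost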
I want to add more talents for a character. I added data to the character_talent table for the character, but it doesn't work.
Can you share the steps with me?
Not dual specialization? 3 or 4?
And does the client support this?
It depends on your question. If you mean more talent points available for players to use, then you can do it in the world configuration at the line:
Rate.Talent
Description: Talent point rate.
Default: 1
Rate.Talent = 1
Rate.Talent = 1 is the normal (default) rate: starting at level 10, you get 1 talent point per level, 71 in total.
Rate.Talent = 2 is double the rate, so instead of 71 talent points in total you will receive 142.
I'm hoping someone may be able to help with a problem I have, which I'm trying to solve using R.
Individuals can submit requests for items. The minimum number of requests per person is one. There is a recommended maximum of five, but people can submit more in exceptional circumstances. Each item can only be allocated to one individual.
Each item has a 'desirability'/quality score ranging from 10 (high quality) down to 0 (low quality). The idea is to allocate items, in line with requests, such that as many high quality items as possible are allocated. It is less important that individuals have an equitable spread of requests met.
Everyone has to have at least one request met. The next priority is to look at whether we can bring anyone who is over the recommended limit back within it by allocating their requests to others. After that, the priority is to look at where the item would rank in each individual's request list based on quality score, and allocate it to the person for whom it would rank highest (e.g., if it would be first in someone's list and third in another's, give it to the former).
Effectively I'd need a sorting algorithm of some kind that:
- identifies where an item has been requested more than once;
- checks all the requests of everyone making said request;
- if that request is the only one a person has made, gives it to them (if this scenario applies to more than one person, it should be flagged in some way);
- if all requesters have made more than one request, checks whether any have made more than five requests; if they have, it can be taken off them;
- if all are within the recommended limit, sees where the request would rank (based on quality score) in each person's list and gives it to the person in whose list it would rank highest.
The process needs to check that the last step isn't happening to people so many times that it leaves them without any requests, so it effectively has to check one item at a time (a rough sketch of this is given after the example data below).
Does anyone have any ideas about how to approach this? I can think of all kinds of ways I could arrange the data to make it easy to identify where this needs to happen, but not how to automate the process itself. Thanks in advance for any help.
The data (at least the bits needed for this process) looks like the below:
Item ID Person ID Item Score
1 AAG 9
1 AAK 8
2 AAAX 8
2 AN 8
2 AAAK 8
3 Z 8
3 K 8
4 AAC 7
4 AR 5
5 W 10
5 V 9
6 AAAM 7
6 AAAL 7
7 AAAAN 5
7 AAAAO 5
8 AB 9
8 D 9
9 AAAAK 6
9 AAAAC 6
10 A 3
10 AY 3
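A rough greedy sketch in R of the priority rules above, not a tested solution: it assumes a data frame called requests with columns item_id, person_id and score matching the table, and it does not yet enforce the "everyone gets at least one item" guarantee, which would need a repair pass afterwards (or a proper assignment solver such as lpSolve).

allocate_items <- function(requests) {
  requests$item_id   <- as.character(requests$item_id)
  requests$person_id <- as.character(requests$person_id)

  # how many requests each person has made, and where each item ranks
  # within that person's own request list (1 = their highest-quality request)
  requests$n_requests   <- ave(requests$score, requests$person_id, FUN = length)
  requests$rank_in_list <- ave(-requests$score, requests$person_id,
                               FUN = function(x) rank(x, ties.method = "min"))

  # process items one at a time, highest-quality items first
  item_order <- unique(requests$item_id[order(-requests$score)])
  winner     <- setNames(rep(NA_character_, length(item_order)), item_order)

  for (it in item_order) {
    cand <- requests[requests$item_id == it, ]

    # 1. a person whose only request this is gets priority (flag ties for review)
    only <- cand[cand$n_requests == 1, ]
    if (nrow(only) > 0) {
      if (nrow(only) > 1) message("Item ", it, ": more than one single-request person")
      winner[it] <- only$person_id[1]
      next
    }

    # 2. prefer people within the recommended limit of five requests, if any
    within <- cand[cand$n_requests <= 5, ]
    if (nrow(within) > 0) cand <- within

    # 3. give the item to the person in whose own list it would rank highest
    cand <- cand[order(cand$rank_in_list), ]
    winner[it] <- cand$person_id[1]
  }

  data.frame(item_id = names(winner), person_id = unname(winner))
}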
I am trying to use CART to analyse a data set in which each row is a segment, for example:
Segment_ID | Attribute_1 | Attribute_2 | Attribute_3 | Attribute_4 | Target
1 | 2 | 3 | 100 | 3 | 0.1
2 | 0 | 6 | 150 | 5 | 0.3
3 | 0 | 3 | 200 | 6 | 0.56
4 | 1 | 4 | 103 | 4 | 0.23
Each segment has a certain population from the base data (irrelevant to my final use).
I want to condense the segments, for example in the above case the 4 segments into 2 big segments, based on the 4 attributes and on the target variable. I am currently dealing with 15k segments and want only about 10 segments, with each final segment based on the target and also having a sensible attribute distribution.
Now, pardon me if I am wrong, but CHAID in SPSS (if not using autogrow) will generally split the data in a 70:30 ratio, building the tree on 70% of the data and testing on the remaining 30%. I can't use this approach since I need all my segments in the data to be included. I essentially want to club these segments into a few big segments as explained before. My question is whether I can use CART (rpart in R) for the same. There is an explicit 'subset' option in the rpart function in R, but I am not sure whether omitting it will ensure that CART uses 100% of my data. I am relatively new to R, hence this very basic question.
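Regarding the subset option: rpart fits on every row of the data frame you pass unless you explicitly supply subset, so omitting it uses 100% of your data. A minimal sketch of what the call could look like, assuming a data frame named seg with the columns from the example table:

library(rpart)

# no subset argument, so all 15k segments are used in the fit
fit <- rpart(Target ~ Attribute_1 + Attribute_2 + Attribute_3 + Attribute_4,
             data = seg, method = "anova",
             control = rpart.control(cp = 0.001, maxdepth = 4))

# each terminal node (leaf) of the tree becomes one condensed "big" segment
seg$big_segment <- fit$where
table(seg$big_segment)   # how many original segments fall into each leaf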
I have two groups (data frames) in R called good and bad, which contain good users and bad users respectively.
The group good contains game_id, which is the ID of a computer game, and number, which is how many times that game has been played.
For example, good$game_id gives 1 2 3 ... 20; we have 20 games.
Similarly, good$number gives 45214 1254 23 ... 8914, which is the number of times each game has been played. For example, game_id==1 has been played 45214 times in group good.
The same goes for bad.
We also have the same number of users in the two groups.
So for head(good,20) we get
game_id number
1 45214
2 1254
...
20 8914
I want to investigate whether there is dependence between the groups in the number of times a fixed computer game has been played.
For game_id==1 I would try to use Pearson's chi-squared test of independence.
In R I type chisq.test(good[1,2], bad[1,2]) to see if there is independence between good and bad for game_id==1, but I get an error message: x and y must have same levels.
How can this problem be solved?
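One way to set this up (a sketch, assuming good and bad have the game_id and number columns described above): chisq.test expects a contingency table of counts, not two single numbers, so build the table first.

# counts for game 1 versus all other games, in each group
g1     <- good$number[good$game_id == 1]
g_rest <- sum(good$number) - g1
b1     <- bad$number[bad$game_id == 1]
b_rest <- sum(bad$number) - b1

tab <- matrix(c(g1, g_rest, b1, b_rest), nrow = 2,
              dimnames = list(c("game 1", "other games"), c("good", "bad")))
chisq.test(tab)

# or test whether the whole distribution over the 20 games differs between groups
chisq.test(cbind(good = good$number, bad = bad$number))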
I've implemented a simple up/down voting system on a website, and I keep track of individual votes as well as vote time and a unique user ID (hashed IP).
My question is not how to calculate the percentage or sum of the votes, but rather: what is a good algorithm for determining a score based on votes?
I find sorting by pure vote percent to be unacceptable, as well as simply tallying upvotes.
Consider this example:
Image A: 4 upvotes, 1 downvote
Image B: 5 upvotes, 4 downvotes
Image C: 1 upvote, 0 downvotes
The ideal system would put A first, maybe followed by B and then C.
In a pure percentage scenario, the order is C > A > B. (wrong)
In a pure vote count scenario, the order is B > A > C. (wrong)
I have an idea for a somewhat "hybrid" algorithm based on the system's confidence in a score, maybe something along the lines of:
// (if totalvotes > 0, else score = 0)
score = 1 - ((downvotes + 1) / (totalvotes + 1)) * sqrt(1 / totalvotes)
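For what it's worth, a quick R check of that formula (with the parenthesisation above) against the three example images does give the desired A > B > C ordering:

hybrid_score <- function(up, down) {
  total <- up + down
  if (total == 0) return(0)
  1 - ((down + 1) / (total + 1)) * sqrt(1 / total)
}
hybrid_score(4, 1)   # image A: ~0.851
hybrid_score(5, 4)   # image B: ~0.833
hybrid_score(1, 0)   # image C: 0.5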
However, I was hoping to ask the community if there are any really well-defined algorithms already out there that I simply don't know about, before I sit around tweaking my algorithm from now until sunset.
I also have date data for each vote - however, the content of the site isn't very time-sensitive so I don't really care to sort by "what's hot" at all.
Sorting by the average of votes is not very good.
By instead balancing the proportion of positive ratings against the uncertainty of a small number of observations, as explained in this article, you achieve a much better representation of your scores.
The article below explains how not to make the same mistake that many popular websites make (Amazon, Urban Dictionary, etc.):
http://evanmiller.org/how-not-to-sort-by-average-rating.html
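In R, a sketch of the lower bound of the Wilson score interval that the article describes could look like this (95% confidence by default; the function name is just illustrative). Scoring the three example images from the question gives the desired A > B > C ordering:

ci_lower_bound <- function(pos, n, confidence = 0.95) {
  # pos = number of upvotes, n = total number of votes
  if (n == 0) return(0)
  z <- qnorm(1 - (1 - confidence) / 2)
  phat <- pos / n
  (phat + z^2 / (2 * n) - z * sqrt((phat * (1 - phat) + z^2 / (4 * n)) / n)) /
    (1 + z^2 / n)
}
ci_lower_bound(4, 5)   # image A: ~0.376
ci_lower_bound(5, 9)   # image B: ~0.267
ci_lower_bound(1, 1)   # image C: ~0.206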
Hope this helps!
I know that doesn't answer your question, but I just spent 3 minutes for fun trying to find some formula, and... just check it :) Column A is upvotes and column B is downvotes :) (an R version of the formula is sketched after the table below)
=(LN((A1+1)/(A1+B1+1))+1)*LN(A1)
A (up)  B (down)  result
5 3 0.956866995
4 1 1.133543015
5 4 0.787295787
1 0 0
6 4 0.981910844
2 8 -0.207447157
6 5 0.826007385
3 3 0.483811507
4 0 1.386294361
5 0 1.609437912
6 1 1.552503332
5 2 1.146431478
100 100 -3.020151034
10 10 0.813671022
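The same formula as a small R function, in case anyone wants to check the numbers above or use it outside a spreadsheet (natural logarithm; up is column A and down is column B; note it is undefined when up = 0):

sheet_score <- function(up, down) (log((up + 1) / (up + down + 1)) + 1) * log(up)
sheet_score(5, 3)    # 0.956867
sheet_score(2, 8)    # -0.207447
sheet_score(10, 10)  # 0.813671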