Avoiding the limited number of avoid areas in the Routing API - here-api

I have a case where I exceed the maximum number of avoid areas when using avoid areas in the Routing API.
There is a similar question here:
Maximum number of avoid areas exceeds limit using avoid areas
But I am not able to ask further questions there. The answer says that it is an API limit.
My question is: is there any possibility to get around this limit?
Thanks,
RAS

As per the documentation, this is the maximum you can pass in the parameter, and there is no other way to avoid this limit; it is in place for various performance and functional reasons.
Note: You need to specify multiple avoid areas and use the "!"
character as a separator. You can specify up to 20 areas to avoid.
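
For illustration, here is a minimal sketch of building that parameter in Python (the bounding-box coordinates below are made up, and the exact area format should be checked against the documentation for your API version):

```python
# Sketch: building the avoidareas value for the HERE Routing API.
# Each area is assumed to be a bounding box
# "topLeftLat,topLeftLon;bottomRightLat,bottomRightLon"; areas are joined
# with "!" as the documentation describes. Coordinates are made up.
MAX_AVOID_AREAS = 20  # documented API limit

areas = [
    ("52.52,13.40", "52.50,13.42"),
    ("52.48,13.35", "52.46,13.37"),
]

if len(areas) > MAX_AVOID_AREAS:
    raise ValueError(f"the Routing API accepts at most {MAX_AVOID_AREAS} avoid areas")

avoidareas = "!".join(f"{top_left};{bottom_right}" for top_left, bottom_right in areas)
print(avoidareas)  # 52.52,13.40;52.50,13.42!52.48,13.35;52.46,13.37
```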

Related

Maximum size of a ruleset – Firestore

What should someone do if the maximum size of a Firestore ruleset (64 KB) is reached? I don't have complex rules but rather lots of variables to check. Mostly I check for type and value. Guess they added up quickly :( What is the preferred course of action here? Should someone be less thorough and, for example, not check variable types in order not to exceed the limit? I was trying to be as thorough as possible and then I ran into the size limitation. Is there any way to circumvent the limit, or any advice on what to check for and what not?
There are a few things you can do to reduce the total size, depending on what your rules actually do:
- Use functions to share redundant code across rules
- Use recursive wildcards to apply common rules to documents in nested subcollections
- Shorten the names of any wildcards
- Write code to compact rules before uploading by eliminating unnecessary leading whitespace (see the sketch after this list)
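
For the last suggestion, here is a naive sketch of such a compacting pass, assuming //-style line comments (it would mangle a // inside a string literal, so treat it as a starting point only):

```python
# Naive pass that compacts a firestore.rules file before upload by dropping
# blank lines, comment-only lines, and leading whitespace. It does not handle
# "//" inside string literals.
def compact_rules(source: str) -> str:
    kept = []
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("//"):
            continue  # skip blank lines and comment-only lines
        kept.append(stripped)
    return "\n".join(kept)

with open("firestore.rules") as f:
    print(compact_rules(f.read()))
```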
It's not advisable to eliminate rules entirely. If you do need to do that for whatever reason, you will need to make a judgement call. We can't tell you which rules are more important than others for your particular use cases.
I also suggest filing a feature request with Firebase support to indicate what you need from the system.

Firestore rules limit: max file size of 64KB reached for firestore.rules file. Now what?

I recently reached this limit. The error the CLI gives is very unobvious, simply stating Request contains an invalid argument, and it took me quite a while to realize I had reached the maximum limit of 64KB for the firestore.rules file. It would be great if that error were a bit more obvious, as it would have saved me a bunch of time.
The limits are documented here.
https://firebase.google.com/docs/firestore/security/rules-structure#security_rule_limits
After doing a bunch of searching for solutions to the 64KB limit and not finding anything, I contacted support. Their guidance was somewhat helpful, but in places a bit shocking.
I'm adding these details here for anyone else that is struggling with this issue as not much else comes up when searching on Google.
Here was their response:
The given limit on the ruleset size is fixed, and it cannot be increased. However, we do offer ways to reduce the size of your ruleset, or 'lines of code' in particular.
Here are some suggestions:
1) You can define your own custom functions, which can be reused throughout the ruleset
- This will definitely save you a lot of data space, and it keeps things organized, as frequently used conditions can be defined in one place
2) If possible, reconsider the database structure to make it efficient
- The fewer collections and subcollections there are, the fewer rules are written, which makes the ruleset smaller
- Refactor your database structure and security rules as much as possible by removing unnecessary or redundant parts
3) Minimize the use of data validation rules, and put them at the application level instead
- Not only can this reduce lines of code, it can also reduce the number of expressions evaluated, helping you avoid the limit of 1000 expressions per request
- As much as possible, use your app logic to ensure your data has the right character length, the correct data type, meets the pattern criteria, etc. You may also use your app's advanced UI elements, like a password textbox or a textbox with a limited character length, among others
Some of these suggestions, such as using functions, are definitely helpful. However, I'm a bit surprised by the suggestions to restructure my database and to put validation at the application level.
In my case I've already used functions quite a lot in my database rules and have removed most of the redundancy.
Asking me to reconsider the database structure and having to overhaul my application for the sake of reducing my rules file size is a huge ask for little gain.
Minimizing the use of data validation rules seems to go completely against the design of the Firestore database. The database is designed so that you can connect your client directly to it, removing the middle application layer. Suggesting this seems to run directly against the architectural nature of Firebase. I would prefer to keep building my application this way (and I want to maintain my security), so this seems like a non-option.
Does anyone else have any suggestions on how to handle this issue?
Technically, this problem does not seem too dissimilar from trying to keep javascript file size down on the web. A lot of the same approaches taken by javascript minifiers could be used to reduce the size of the firebase rules file. It would be great if Firebase provided a tool like this for us.
Out of desperation I made a simple minifier that removes whitespace and comments, in case anyone else finds themselves in a similar situation and needs a quick fix: https://github.com/brianneisler/firemin

Getting a large number of entities from datastore

Following this question, I am able to store a large number (>50k) of entities in the datastore. Now I want to access all of them in my application. I have to perform mathematical operations on them, and it always times out. One way is to use TaskQueue again, but that would be an asynchronous job. I need a way to access these 50k+ entities in my application and process them without timing out.
Part of the accepted answer to your original question may still apply, for example a manually scaled instance with a 24h deadline, or a VM instance. For a price, of course.
Some speedup may be achieved by using memcache.
Side note: depending on the size of your entities you may need to keep an eye on the instance memory usage as well.
Another possibility would be to switch to a faster instance class (and with more memory as well, but also with extra costs).
But all such improvements might still not be enough. The best approach would still be to give your entity data processing algorithm a deeper thought - to make it scalable.
I'm having a hard time imagining a computation so monolithic that it can't be broken into smaller pieces which don't need all the data at once. I'm almost certain there has to be some way of using partial computations, maybe storing some partial results, so that you can split the problem and handle it in smaller pieces across multiple requests.
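
To make that concrete, here is a rough sketch of batch processing with ndb query cursors on the first-generation Python runtime (the Record model, the /process handler URL and the partial-result storage are all hypothetical placeholders for your own code):

```python
# Rough sketch: process 50k+ entities in batches, each batch in its own
# request, by passing an ndb query cursor from one task to the next.
from google.appengine.api import taskqueue
from google.appengine.ext import ndb

BATCH_SIZE = 500  # small enough to finish well within the request deadline

class Record(ndb.Model):          # stand-in for your entity kind
    value = ndb.IntegerProperty()

def save_partial_result(partial):
    # Persist the partial result somewhere (an entity, memcache, ...)
    # so the pieces can be combined at the end; omitted here.
    pass

def process_batch(cursor=None):
    entities, next_cursor, more = Record.query().fetch_page(
        BATCH_SIZE, start_cursor=cursor)
    partial = sum(e.value for e in entities)  # your math on this batch
    save_partial_result(partial)
    if more:
        # Hand the cursor to a new task so the next batch runs in a
        # fresh request instead of hitting this request's deadline.
        taskqueue.add(url='/process',
                      params={'cursor': next_cursor.urlsafe()})
```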
As an extreme (academic) example think about CPUs doing pretty much any super-complex computation fundamentally with just sequences of simple, short operations on a small set of registers - it's all about how to orchestrate them.
Here's a nice article describing a drastic reduction in the overall duration of a computation (no clue if it's anything like yours) by using a nice approach (also interesting because it uses the GAE Pipeline API).
If you post your code you might get some more specific advice.

Convention for combining GET parameters with AND?

I'm designing an API and I want to allow my users to combine a GET parameter with AND operators. What's the best way to do this?
Specifically I have a group_by parameter that gets passed to a Mongo backend. I want to allow users to group by multiple variables.
I can think of two ways:
?group_by=alpha&group_by=beta
or:
?group_by=alpha,beta
Is either one to be preferred? I've consulted a few API design references but no-one seems to have a view on this.
There is no strict preference. The advantage to the first approach is that many frameworks will turn group_by into an array or similar structure for you, whereas in the second approach you need to parse out the values yourself. The second approach is also less verbose, which may be relevant if your query string is particularly large.
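
As a quick illustration of the parsing difference, using Python's standard library:

```python
# Both styles parsed with the standard library.
from urllib.parse import parse_qs

# Repeated parameter: many frameworks hand you a list directly.
print(parse_qs("group_by=alpha&group_by=beta")["group_by"])
# ['alpha', 'beta']

# Comma-separated: one value that you split yourself.
print(parse_qs("group_by=alpha,beta")["group_by"][0].split(","))
# ['alpha', 'beta']
```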
You may also want to test with the first approach that the query strings always come into your framework in the order the client sent them. Some frameworks have a bug where that doesn't happen.

Should I use an expression parser in my Math game?

I'm writing some children's Math Education software for a class.
I'm going to try to present students of varying skill levels with randomly generated math problems of different types, in fun ways.
One of the frustrations of using computer-based math software is its rigidity. If you've ever taken an online Math class, you'll know all about the frustration of taking an online quiz and having your correct answer thrown out because it isn't formatted exactly the way they expect, or because of some weird spacing issue.
So, originally I thought, "I know! I'll use an expression parser on the answer box so I'll be able to evaluate anything they enter and even if it isn't in the same form I'll be able to check if it is the same answer." So I fire up my IDE and start implementing the Shunting Yard Algorithm.
This would solve the problem of answers being rejected because a fraction isn't in its lowest form, among other issues.
However, it then hit me that a tricky student could simply enter most of the problems straight into the answer box, and my expression parser would dutifully parse and evaluate them to the correct answer!
So, should I not be using an expression parser in this instance? Do I really have to generate a single form of the answer and do a string comparison?
One possible solution is to note how many steps your expression evaluator takes to evaluate the problem's original expression, and compare this to the number of steps for the optimal answer. If there's too much of a difference, then the problem hasn't been reduced enough and you can suggest that the student keep going.
Don't be surprised if students come up with better answers than your own definition of "optimal", though! I was a TA/grader for several classes, and the brightest students routinely had answers on their problem sets that were superior to the ones provided by the professor.
For simple problems where you're looking for an exact answer, removing whitespace and doing a string compare is reasonable.
For more advanced problems, you might still use the Shunting Yard Algorithm (or similar), but parametrize it so you can turn reductions on and off to guard against the tricky student. Note that "simple" answers can still use the parser; you would just disable all reductions.
For example, on a division question, you'd disable the "/" reduction.
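
A rough sketch of what disabling a reduction could look like, using Python's ast module as a stand-in for a hand-rolled Shunting Yard parser (the checker and its names are made up for illustration):

```python
# With division disabled, the checker refuses to reduce "/", so typing the
# original problem ("8/2") back in does not match the answer ("4").
import ast

def evaluate(node, allow_div=True):
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left = evaluate(node.left, allow_div)
        right = evaluate(node.right, allow_div)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Sub):
            return left - right
        if isinstance(node.op, ast.Mult):
            return left * right
        if isinstance(node.op, ast.Div):
            if not allow_div:
                raise ValueError("division not allowed in the answer")
            return left / right
    raise ValueError("unsupported expression")

def check(answer, expected, allow_div=True):
    try:
        return evaluate(ast.parse(answer, mode="eval").body, allow_div) == expected
    except (ValueError, SyntaxError):
        return False

print(check("4", 4, allow_div=False))    # True
print(check("8/2", 4, allow_div=False))  # False: "/" reduction is disabled
```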
This is a great question.
If you are writing an expression system and an evaluation/transformation/equivalence engine (isn't one available somewhere? I'm almost 100% sure there is an open-source one), then it's more of an education/algebra problem: is the student's answer algebraically closer to the original expression or to the expected expression?
I'm not sure how to answer that, but here's an idea (not necessarily practical): perhaps your evaluation engine can count transformation steps to equivalence. If the answer takes fewer steps to reach the expected expression than it does to reach the original, it might be OK. If it's too close to the original, it's not.
You could use an expression parser, but apply restrictions on the complexity of the expressions permitted in the answer.
For example, if the goal is to reduce (4/5)*(1/2) and you want to allow either (2/5) or (4/10), then you could restrict the set of allowable answers to expressions whose trees take the form (x/y) and which also evaluate to the correct number. Perhaps you would also allow "0.4", i.e. expressions of the form (x) which evaluate to the correct number.
This is exactly what you would (implicitly) be doing if you graded the problem manually -- you would be looking for an answer that is correct but which also falls into an acceptable class.
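
A rough sketch of that kind of form restriction, again leaning on Python's ast module for parsing (check_answer and the helper names are hypothetical; Fraction gives exact comparison, so 2/5, 4/10 and 0.4 all match):

```python
# Accept an answer only if its parse tree is a bare number or a single
# division of two numbers, AND it evaluates to the expected value.
import ast
from fractions import Fraction

def is_number(node):
    return isinstance(node, ast.Constant) and isinstance(node.value, (int, float))

def has_allowed_form(node):
    """True for a bare number, or exactly one division of two numbers."""
    if is_number(node):
        return True
    return (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
            and is_number(node.left) and is_number(node.right))

def value(node):
    if is_number(node):
        return Fraction(str(node.value))  # exact, so 0.4 == 2/5 == 4/10
    return value(node.left) / value(node.right)

def check_answer(text, expected):
    tree = ast.parse(text, mode="eval").body
    return has_allowed_form(tree) and value(tree) == expected

expected = Fraction(2, 5)                     # the reduced form of (4/5)*(1/2)
print(check_answer("2/5", expected))          # True
print(check_answer("4/10", expected))         # True
print(check_answer("0.4", expected))          # True
print(check_answer("(4/5)*(1/2)", expected))  # False: disallowed form
```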
The usual way of doing this in mathematics assessment software is to allow the question setter to specify expressions/strings that are not allowed in a correct answer.
If you happen to be interested in existing software, there's the open-source Stack (http://www.stack.bham.ac.uk/), or various commercial options such as MapleTA. I suspect most of the problems you'll come across have also been encountered by Stack, so even if you don't want to use it, it might be educational to look at how it approaches things.
