I'm considering using Firebase for a project, but I can't seem to find any information on server-side data validation.
Let's say I'm making a game and a player deals damage to another player. I would like to validate the following:
That the players are actually close to each other
That the damage points correspond to the attack used
That the data has not been tampered with on its way from the client to the server
Etc.
Is it possible to validate this kind of thing / add server-side logic directly with Firebase, or do I have to add an intermediate server, basically defeating the whole point of using Firebase in the first place?
Thanks in advance
Jonas
Validating data is definitely possible with Firebase. It is part of its "security" rules, for which the documentation can be found here and here.
A simple example from that last documentation link:
a sample .validate rule definition that only allows dates in the format YYYY-MM-DD between the years 1900 and 2099, checked using a regular expression:
".validate": "newData.isString() &&
newData.val().matches(/^(19|20)[0-9][0-9][-\\/. ](0[1-9]|1[012])[-\\/. ](0[1-9]|[12][0-9]|3[01])$/)"
You can build pretty complicated validation rules. If you need those, you might want to have a look at Firebase's blaze compiler, which translates a higher-level language into Firebase's relatively low-level rules. The author of the blaze compiler originally wrote it for your second and third use cases and wrote an article about it here.
I hope these are enough to get you started. If you get stuck, just post a question with the rules you tried.
The Firebase documentation recommends including the code snippet given at https://firebase.google.com/docs/functions/networking#https_requests to optimize networking, but a few details are missing, like:
How exactly does this help?
Are we supposed to call the function defined as per the recommendation, or just include this snippet in the deployment?
Any documentation around this would be of great help.
This is an example showing how you would make this request. The key part in this example is the agent field, which by nature isn't normally managed within your app. By injecting a reference to it manually, you are able to micro-manage it and its events directly.
As for the second question, it really depends on your Cloud Function needs. Some users set it in a global object that they share across all their Cloud Functions, but it's done on a by-use-case basis and ultimately isn't required.
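To make that more concrete, here is a rough sketch of the pattern (the function name and target host are placeholders of mine, not the exact snippet from the docs). The important part is that the agent lives in module scope, so a warm instance keeps reusing its open connections across invocations:

import * as https from "https";
import * as functions from "firebase-functions";

// Declared outside the handler so warm instances reuse keep-alive TCP connections.
const agent = new https.Agent({ keepAlive: true });

export const callApi = functions.https.onRequest((req, res) => {
  https
    .get({ host: "api.example.com", path: "/data", agent }, (apiRes) => {
      let body = "";
      apiRes.setEncoding("utf8");
      apiRes.on("data", (chunk) => (body += chunk));
      apiRes.on("end", () => res.status(200).send(body));
    })
    .on("error", (e) => res.status(500).send(e.message));
});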
You can read more about http.Agent and its usage below:
https://nodejs.org/api/http.html
https://www.tabnine.com/code/javascript/functions/http/Agent
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent
I'm following the docs for Firestore here on Aggregation Queries.
I couldn't help but notice that the Cloud Function solution wouldn't exactly work since it's not idempotent: numRatings is incremented and avgRating recomputed on every invocation, including retries.
Though this example could be made idempotent if a separate document were also stored for each new rating: you'd add a check for whether the user has already submitted a rating for the restaurant.
Is there something I'm missing that makes this example idempotent? Or is the point of the example just to show that this could be done in a cloud function?
Making a function idempotent requires a lot of extra lines of code, which would make the example much harder to understand. You should expect sample code not to be idempotent unless it's specifically trying to demonstrate idempotence.
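Just to illustrate what those extra lines could look like, here is a rough, untested sketch along the lines you describe: each rating document gets a "counted" flag that is checked and set inside the same transaction that updates the aggregate, so a redelivered event becomes a no-op. The collection and field names follow the sample; the flag itself is my own assumption and not part of the official example.

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const aggregateRatings = functions.firestore
  .document("restaurants/{restId}/ratings/{ratingId}")
  .onCreate((snap, context) =>
    db.runTransaction(async (tx) => {
      // Re-read the rating inside the transaction and bail out if it was already counted.
      const ratingSnap = await tx.get(snap.ref);
      if (!ratingSnap.exists || ratingSnap.get("counted")) return;

      const restRef = db.collection("restaurants").doc(context.params.restId);
      const restSnap = await tx.get(restRef);

      // Incremental average update, as in the sample.
      const numRatings = (restSnap.get("numRatings") || 0) + 1;
      const oldAvg = restSnap.get("avgRating") || 0;
      const avgRating = oldAvg + (ratingSnap.get("rating") - oldAvg) / numRatings;

      tx.update(restRef, { numRatings, avgRating });
      tx.update(snap.ref, { counted: true }); // makes a retry of this event a no-op
    })
  );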
If you have feedback for the authors of the documentation, you are free to give that with the "SEND FEEDBACK" button at the top of each page.
I've read some topics about GraphQL, and one of the great features I like is that you can specify the fields you want on the client end.
I'm thinking maybe I can also add this to a REST API. I looked around and found there is already such a specification: fetching-sparse-fieldsets.
So I'm trying to add such a feature in Symfony (specifically with FOSRestBundle + JMSSerializer).
But I'm not quite sure whether it is valuable or not. Can someone give me advice?
That is a question you should ask your API users and your customer. It might be useful for the API user, but it has a lot of downsides:
By building it yourself, you spend a lot of time developing, testing and maintaining this 'optional' feature. Is the customer willing to pay for it? (YAGNI)
As mentioned above, you must maintain the code for this feature; you cannot remove it unless you decide to release a new API version. As external packages change, your code might require an update as well.
It can become difficult when troubleshooting. API users might be trying to retrieve a field that isn't specified in the API URL (of course these issues arise after the project has been transferred to other developers). Questions can come up about why some data isn't available even though it is present in the API documentation.
Especially the first one in the list is incredibly important: don't build features that the customer does not need. Personally, I always return all accessible data, even null values, and let the API users decide what to do with it. Bandwidth isn't such a problem these days, I guess.
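That said, if you decide it is worth building, the core of the feature is small. Here is a rough, framework-agnostic sketch of the idea, written in TypeScript/Express purely for brevity; in your stack it would map to something like JMS serializer groups or an exclusion strategy in FOSRestBundle, and the endpoint and field names below are made up:

import express from "express";

const app = express();

// Keep only the requested fields, e.g. GET /articles/1?fields=title,author
function pickFields(obj: Record<string, unknown>, fields?: string) {
  if (!fields) return obj; // no filter requested: return everything
  const wanted = fields.split(",");
  return Object.fromEntries(Object.entries(obj).filter(([key]) => wanted.includes(key)));
}

app.get("/articles/:id", (req, res) => {
  // Stand-in for whatever your persistence layer returns.
  const article = { id: req.params.id, title: "Hello", body: "...", author: "jane" };
  res.json(pickFields(article, req.query.fields as string | undefined));
});

app.listen(3000);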
I am designing a system that uses ASP.NET Web API to serve data that is used by a number of jQuery grid controls. The grids call back for the data after the page has loaded. I have a User table and a Project table. In between these is a Membership table that stores the many-to-many relationships.
User
  userID
  Username
  Email

Project
  projectID
  name
  code

Membership
  membershipID
  projectID
  userID
My question is: what is the best way to describe this data and these relationships as a Web API?
I have the following routes
GET: user // gets all users
GET: user/{id} // gets a single user
GET: project
GET: project/{id}
I think one way to do it would be to have:
GET: user/{id}/projects // gets all the projects for a given user
GET: project/{id}/users // gets all the users for a given project
I'm not sure what the configuration of the routes and the controllers should look like for this, or even if this is the correct way to do it.
The modern standard for that is a very simple approach called REST. Just read it carefully and implement it.
Like Ph0en1x said, REST is the new trend for web services. It looks like you're on the right track already with some of your proposed routes. I've been doing some REST design at my job and here are some things to think about:
Be consistent with your routes. You're already doing that, but watch out for when/if another developer starts writing routes. A user wants consistent routes for using your API.
Keep it simple. A major goal should be discoverability. What I mean is that if I'm a regular user of your system, and I know there are users and projects and maybe another entity called "goal" ... I want to guess at /goal and get a list of goals. That makes a user very happy. The less they have to reference the documentation, the better.
Avoid appending a ton of junk to the query string. We suffer from this currently at my job. Once the API gets some traction, users might want more fine-grained control. Be careful not to turn the URL into something messy, like /user?sort=asc&limit=5&filter=...&projectid=...
Keep the URL nice and simple. Again, I love this in a well-designed API. I can easily remember something like http://api.twitter.com. Something like http://www.mylongdomainnamethatishardtospell.com/api/v1/api/user_entity/user ... is much harder to remember and is frustrating.
Just because a REST API is on the web doesn't mean it's all that different from a normal method in client-side-only code. I've read arguments that any method should have no more than 3 parameters. This idea is similar to (3). If you find yourself wanting to expand, consider adding more methods/routes, not more parameters.
I know what I want in a REST API these days and that is intuition, discoverability, simplicity and to avoid having to constantly dig through complex documentation.
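To tie that back to the routes in the question: the nested user/{id}/projects and project/{id}/users style is a common way to expose a many-to-many relationship. Here is a rough sketch of the shape, written as Express/TypeScript only to keep it short; in ASP.NET Web API the same URLs would come from route templates on the controllers, and the data-access helpers are made-up stubs:

import express from "express";

const app = express();

// Hypothetical data-access helpers backed by the Membership table.
async function findProjectsForUser(userId: string): Promise<unknown[]> {
  return []; // stub: join Membership to Project where userID matches
}
async function findUsersForProject(projectId: string): Promise<unknown[]> {
  return []; // stub: join Membership to User where projectID matches
}

// GET /user/42/projects -> all the projects for a given user
app.get("/user/:id/projects", async (req, res) => {
  res.json(await findProjectsForUser(req.params.id));
});

// GET /project/7/users -> all the users for a given project
app.get("/project/:id/users", async (req, res) => {
  res.json(await findUsersForProject(req.params.id));
});

app.listen(3000);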
I've been using this site for quite a while, usually being able to sort out my questions by browsing through the questions and following tags. However, I've recently come across a question that is rather hard to look up amongst the great number of questions asked - a question I hope some of you might be able to share your opinion on.
As my problem is a bit hard to fit into a single line for the title, I'll try to give a bit more detail on the problem I've encountered. So, as the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP-based web service returns when invoking various web methods. The web service is used to return data used by other systems (a data repository, more or less), where the client today is able to specify a few parameters on how the data should be filtered and gets a full set of data back in return.
Well, easy enough, I thought: just put additional filtering options on the existing web methods that need a bit more filtering applied, make adjustments on the server side, and we're all set to go. Unfortunately, it turned out to be a bit more tricky than that.
The problem I am facing is that I'm working on a web service running in a production environment, which needs to be extended so that additional filters can be applied to an existing web method without affecting the calls already being made by other systems that the customer uses through their existing client stubs. This is where I am a bit troubled, since I can't seem to find a "right solution" for extending the current web service.
Today, the filter is sent as a custom data structure which holds information on which data should be filtered, but I am not sure whether I can simply add more information to this data structure without breaking code at the clients. One of my co-workers suggested that I could extend the web.config on the server side to hold a section with details on which data should be excluded (filtered out), but I don't find this to be a viable long-term solution, and I don't trust customers with such an option since it is likely to go wrong at some point. So what I am looking for is a way to apply a "second filter" to the data requested by the client, so that instead of getting a full set of data back they only get a fraction of it, implemented in such a way that the filter can be easily modified without affecting the current client calls.
Any suggestions on how I should approach this problem?
Thanks!
Kind regards,
E.
A pretty common practice is to create another instance of the application, or to use part of the URL to signify the version of the endpoint clients are connecting to; perhaps the virtual directory is a date. That way old calls will go to the old API and new calls will come in on the new API.
http://api.example.com/dostuff
vs
http://api.example.com/6-7-2011/dostuff