It's a simple check to restrict duplicate entries, but I can't find a way to do it.
Schema:
{
"languages": {
"unique_id": {
"code": "Fr",
"name": "French"
},
"unique_id": {
"code": "En",
"name": "English"
}
}
}
Security rule that I've tried:
service cloud.firestore {
match /databases/{database}/documents {
match /languages/{language} {
allow write: if !(resource.data.hasAny([request.resource.code]));
}
}
}
For instance, this must not be allowed:
{
"languages": {
"unique_id": {
"code": "Fr",
"name": "French"
},
"unique_id": {
"code": "En",
"name": "English"
},
"unique_id": {
"code": "Fr",
"name": "German"
}
}
}
There really isn't a good way to do this with the current schema. Below are a few methods with different trade-offs you could explore.
Invert Data Model
Change your data model to make uniqueness the only option, which removes the need to validate.
{
"languages": {
"Fr": {
"name": "French"
},
"En": {
"name": "English"
}
}
}
Note in this model it isn't possible to add your broken case.
To query for any document with a particular language code, e.g. 'En', you can do:
.where("En.name", ">", "")
Pros:
Non-unique language codes are impossible
Simple to implement
Cons:
You won't be able to do composite indexes on language code or name
Post-validation
Alternatively, you can set up a Cloud Function to trigger on any write. This function can then enforce uniqueness for you. When it detects an issue, it would follow whatever logic you define, such as flagging the document as bad or removing and logging the subsequent non-unique entries.
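A rough sketch of what such a function could look like with Cloud Functions for Firebase, assuming a top-level "languages" collection whose documents carry a "code" field and a policy of deleting the newly written duplicate; all of these details are illustrative assumptions, not part of the original answer.
// Sketch only: delete a newly written language document if another document
// already uses the same code.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.enforceUniqueCode = functions.firestore
  .document("languages/{languageId}")
  .onWrite(async (change, context) => {
    if (!change.after.exists) return null; // nothing to check on delete

    const code = change.after.get("code");
    const matches = await admin.firestore()
      .collection("languages")
      .where("code", "==", code)
      .get();

    // More than one document with this code means the latest write duplicated it.
    if (matches.size > 1) {
      console.log(`Duplicate code ${code}; removing ${context.params.languageId}`);
      return change.after.ref.delete();
    }
    return null;
  });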
Pros:
You can do composite indexes with languages.unique_id.code and languages.unique_id.name
Cons:
Incorrect data can exist for short periods of time
Harder to give errors back to the client
Post-Update
Rather than allowing clients to update the language codes directly, require them to write to a subcollection. Have a Cloud Function trigger on writes to the subcollection, then update the master document if the new entry passes your checks. Optionally, you can then delete the document in the subcollection or leave it as an audit trail.
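As a minimal sketch of the shape of this flow, reusing the imports from the previous example: clients write proposed entries to a hypothetical "language_requests" subcollection, and the function copies an entry into the parent ("master") document only if its code is not already used. All collection, document, and field names here are placeholders.
// Sketch only: promote a requested language into the parent document's
// "languages" map if no existing entry already uses that code.
exports.applyLanguageRequest = functions.firestore
  .document("catalogs/{catalogId}/language_requests/{requestId}")
  .onCreate(async (snap, context) => {
    const { code, name } = snap.data();
    const parentRef = admin.firestore().doc(`catalogs/${context.params.catalogId}`);

    const parent = await parentRef.get();
    const languages = parent.get("languages") || {};
    const alreadyUsed = Object.values(languages).some(l => l.code === code);

    if (!alreadyUsed) {
      await parentRef.set(
        { languages: { [context.params.requestId]: { code, name } } },
        { merge: true }
      );
    }
    // Optionally delete snap.ref here, or keep it as an audit trail.
    return null;
  });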
Pros:
You can do composite indexes with languages.unique_id.code and languages.unique_id.name
Document will always be correct
Cons:
Data in the document can be stale for short periods of time
Harder to give errors back to the client
Related
Assume that we have two nodes: "items" and "sales". How can I write a Firebase database rule to prevent an item from being deleted while it is referenced in another node? If a user wants to delete "items/i01", permission should be denied because it is referenced under "sales/s01".
"items": {
"i01": {
"name": "item1"
},
"i02": {
"name": "item2"
},
}
"sales": {
"s01": {
"itemKey": "i01",
"price": "45"
},
"s02": {
"itemKey": "i02",
"price": "60"
},
...
}
Security rules can check whether data exists at a known path, but cannot perform searches for data across (a branch of) the JSON tree. So in your current data structure, there is no way to prevent the deletion of the item based on it still being referenced.
The typical solution would be to add a data structure that you can check in security rules to see if the item is still referenced anywhere. This would pretty much be an inverse of your current sales node, which tracks the items in a sale. The inverse node would track the sales for any item:
"sales_per_item": {
"i01": {
"s01": true
},
"i02": {
"s02": true
}
}
You will need to make sure that this new structure (sometimes called an inverted index) is updated to stay in sync with sales, both in code and in security rules.
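For example, a client-side multi-location update that writes a new sale and its inverted-index entry atomically might look roughly like this (a sketch only; the push-key handling and the price value are illustrative):
// Sketch: create a sale and its sales_per_item entry in one atomic update,
// so the inverted index cannot drift out of sync with /sales.
const db = firebase.database();
const itemKey = "i01";
const saleKey = db.ref("sales").push().key; // pre-generate the new sale's key

const updates = {};
updates["sales/" + saleKey] = { itemKey: itemKey, price: "45" };
updates["sales_per_item/" + itemKey + "/" + saleKey] = true;

db.ref().update(updates);
// Deleting a sale would set both of these locations to null in the same way.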
With that in place, you can then prevent deletion of an item that still has references with:
{
"rules": {
"items": {
"$itemid": {
".write": "!newData.exists() && !newData.parent().parent().child('sales_per_item').child($itemid).exists()"
}
}
}
}
As an alternative, you can consider moving the deletion logic into a Cloud Function, where you can do the "check for sales that reference the item" in code, instead of in security rules.
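A rough sketch of that alternative, using a callable Cloud Function; the function name, parameter shape, and error handling are illustrative assumptions:
// Sketch only: refuse to delete an item that is still referenced in sales_per_item.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.deleteItem = functions.https.onCall(async (data, context) => {
  const itemKey = data.itemKey;

  const refs = await admin.database().ref(`sales_per_item/${itemKey}`).once("value");
  if (refs.exists()) {
    throw new functions.https.HttpsError(
      "failed-precondition",
      "This item is still referenced by one or more sales."
    );
  }

  await admin.database().ref(`items/${itemKey}`).remove();
  return { deleted: itemKey };
});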
I also recommend reading these:
How to write denormalized data in Firebase
Patterns for security with Firebase: combine rules with Cloud Functions for more flexibility
Patterns for security with Firebase: offload client work to Cloud Functions
I would like to add a custom extension to my Schedule resource.
In my app, a Schedule has visit motives (reasons). I know there's a list of classified appointment/encounter reasons, but I would like to use my own.
I have something like this:
{
"resourceType":"Schedule",
"identifier":"logical_id",
"type":"schedule_speciality",
"actor":{
"practioner_id":"identifier",
"practioner_name":"practioner name"
},
"external_id":{
"extension":[
{
"url":"http://api.test.com/fhir/schedule/external_id",
"valueIdentifier":"external_id"
}
]
},
"visit_motives":{
"extension":[
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive1"
},
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive2"
},
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive3"
}
]
},
"practice_id":{
"extension":[
{
"url":"https://api.test.com/fhir/schedule/practice_id",
"valueIdentifier":"practice_id"
}
]
}
}
I'm not sure about this part:
"visit_motives":{
"extension":[
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive1"
},
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive2"
},
{
"url":"https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"valueString":"vist_motive3"
}
]
}
Is it correct to add an extension this way? There are always multiple visit motives for a specific schedule, so I have to list them.
I have also seen this kind of thing:
"visit_motives": {
"coding": [
{
"system": "https://api.test.com/fhir/ValueSet/schedule#visit_motives",
"code": "visit_motive1"
}
]
}
Which one is the correct one, or am I wrong?
There are several issues here:
It seems odd to capture a "reason" on a schedule. A schedule says when a particular clinician or clinic or other resource is available. E.g. "Dr. Smith takes appointments Mon/Wed/Fri from 1pm-4pm". So if you were to capture a reason on the resource, it would reflect "Why does Dr. Smith have a schedule?" Typically reasons are captured for an individual Appointment. That's the resource that reserves a particular slot for a planned visit. And Appointment already has an element for reason where you're free to use your own codes or just send text.
You have extensions to convey identifiers, but Schedule already has an element for identifiers. Why would you use extensions instead of the standard element? Note that you can use the "system" and/or "type" components to differentiate different kinds of identifiers.
You're sending "identifier", "type", "name", etc. as simple strings - but they're complex data types, so you need to communicate the child elements
actor is of type Reference - that means you need to point to the Practitioner resource. You can't send the properties in-line. (If the Practitioner only exists in the context of the Schedule, you could use the "contained" approach which would use an internal reference, but containment doesn't seem to make sense in this use-case.
The URL for your extension contains ValueSet, which isn't correct - extensions are all structure definitions. Also, there shouldn't be a # symbol in the URL.
Your syntax for extensions is incorrect. You can't introduce new properties in FHIR. The property name for all extensions is just "extension". You differentiate by the URL. So your syntax should be:
{
"resourceType":"Schedule",
"id":"logical_id",
"extension": [
{
"url":"https://api.test.com/fhir/StructureDefinition/schedule-visit_motive",
"valueString":"vist_motive1"
},
{
"url":"https://api.test.com/fhir/StructureDefinition/schedule-visit_motive",
"valueString":"vist_motive2"
},
{
"url":"https://api.test.com/fhir/StructureDefinition/schedule-visit_motives",
"valueString":"vist_motive3"
}
],
"identifier": [
{
"system": http://api.test.com/fhir/NamingSystem/external_id",
"value": "external_id"
}
{
"system": http://api.test.com/fhir/NamingSystem/practice_id",
"value": "practice_id"
}
]
"type": {
"coding": {
"system": "http://somewhere.org/fhir/CodeSystem/specialties",
"code": "schedule_speciality"
},
"text": "Some text description of specialty"
},
"actor":{
"reference": "http://myserver.org/fhir/Practitioner/12345"
"display": "Dr. smith"
}
}
I was just looking in the docs but couldn't find anything.
So my web app has a structure that's similar to the one on this site.
For the sake of simplicity, let's say my app has only questions, which are catalogued by tags. As suggested in the docs, we store our data in a flat, denormalized structure, e.g.:
{
"questions": {
...
},
"tags": {
"tag1": {
"name": "Tag1",
"questions": { "0": true, "1": true }
},
"tag2": {
"name": "Tag2",
"questions": { "2": true, "3": true }
}
}
}
rather than a normalized structure without data replication, like:
{
"questions": {
"0": { "title": ..., "tag": ... },
"1": { "title": ..., "tag": ... },
}
}
One of the advantages of using the first structure is that I can search for questions that have a certain tag without first downloading the data of all the questions: querying /tags/tag1/questions returns an object with all of that tag's question keys. Now I can query for those questions, but how do I do that?
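For reference, here is a sketch of the straightforward approach with the JavaScript SDK: read the keys indexed under the tag, then fetch each question one by one (paths taken from the structure above).
// Sketch: read the question keys under a tag, then fetch each question separately.
const db = firebase.database();

db.ref("tags/tag1/questions").once("value").then(snap => {
  const questionKeys = Object.keys(snap.val() || {});
  return Promise.all(
    questionKeys.map(key => db.ref("questions/" + key).once("value"))
  );
}).then(questionSnaps => {
  questionSnaps.forEach(q => console.log(q.key, q.val()));
});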
I don't want to make a separate request for every question; that seems like a waste of time and bandwidth, but I couldn't find a way to make Firebase filter by multiple keys. It seems I can only give Firebase one key at a time. I think (and hope) I am missing something here. What is it?
If I really can't do this, how do I search by tags here?
For the past few weeks I've been exploring Firebase and its features to build a web app, but I've run into a wall when it comes to security rules.
I've built a data structure on Firebase, but I'm not sure if it follows best practices (if it doesn't, feel free to suggest changes):
{
"groups" : {
<GROUP_KEY>
"name": "",
"rels": {
"users": {
<RELS_USERS_KEY>
"key":"" (USER_KEY)
},
"notes": {
<RELS_NOTES_KEY>
"key":"" (NOTE_KEY)
}
},
"isPrivate": true
},
"users": {
<USER_KEY>
"email": "",
"rels": {
"friends": {
<RELS_FRIENDS_KEY>
"key":"" (USER_KEY)
}
},
},
"notes": {
<NOTE_KEY>
"title": "",
"description": "",
"rels": {
"files": {
<RELS_FILES_KEY>
"key":"" (FILE_KEY)
}
}
},
"files": {
<FILE_KEY>
"mode": "",
"url": ""
}
}
The application flow is as follows:
The user signs up: a key is created on "users";
Is redirected to "Groups" view, where he should be shown only
groups that have his ID in RELS > USERS, or that has
"isPrivate":"false";
As the user creates a Group, a new group is added with his ID in RELS > USERS;
Entering the Group view, he should only see notes that are in RELS > NOTES for that group.
The rest of the logic follows the same principle, and I believe that if I can get through the first hurdle of understanding the Firebase security rules and applying them to this case, I can get through the rest.
I've tried a couple of rules, but I can't seem to get any feedback at all from the web application; debugging this has been a trial-and-error process, and it's not really working.
Could someone help me at least understanding the logic behind it ? I've read all of their tutorials but they all seem very shallow with no deeper examples on complex structures.
Thanks for the help.
EDIT
I've added the debug:true flag to the login (thanks #Kato), but I'm still getting no feedback on the rules. With the rules as below, I still enter the "Groups" view, but get no feedback on the console, and the logged-in user sees groups he shouldn't:
{
"rules": {
"groups": {
".read": "data.child('rels').child('users/' + auth.user).exists()",
".write": "data.child('rels').child('users/' + auth.user).exists()"
}
}
}
As for the rules I've tried, they were countless, but this is the most recent one (still no feedback).
Maybe I'm missing something?
Thanks again.
Rules cascade. That is, if any rule allows read, then you cannot revoke it later in a nested child. In this way, you can write rules like the following:
"$record": {
// I can write the entire record if I own it
".write": "data.child('owner').val() === auth.uid",
"foo": {
// anybody in my friends list can write to foo, but not anything else in $record
".write": "data.parent().child('friends/'+auth.uid).exists()"
},
"bar": {
// this is superfluous as permissions are only "granted" and never "revoked" by a child
".write": false
}
}
Note how, because I am the owner, I can also write to foo and to bar, even though bar has tried to revoke my write privilege.
So in your case above, your rules declaration lists read: true which allows full read access to the entire repo. Change that to false and you'll see better results.
** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not officially appear to be a way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore outside of App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
"query": {
"kinds": [
{
"name": "CodeProse.Pogo.Tests.TestPerson"
}
],
"filter": {
"compositeFilter": {
"operator": "and",
"filters": [
{
"propertyFilter": {
"operator": "equal",
"property": {
"name": "DepartmentCode"
},
"value": {
"integerValue": "123"
}
}
},
{
"propertyFilter": {
"operator": "greaterThan",
"property": {
"name": "HourlyRate"
},
"value": {
"doubleValue": 50
}
}
},
{
"propertyFilter": {
"operator": "lessThan",
"property": {
"name": "HourlyRate"
},
"value": {
"doubleValue": 100
}
}
}
]
}
}
}
}
with the following response:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "FAILED_PRECONDITION",
"message": "no matching index found.",
"locationType": "header",
"location": "If-Match"
}
],
"code": 412,
"message": "no matching index found."
}
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the XML definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
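In index.yaml form (a sketch based on the kind and property names from the query above), that index would look something like:
indexes:
- kind: CodeProse.Pogo.Tests.TestPerson
  ancestor: no
  properties:
  - name: DepartmentCode
  - name: HourlyRate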
For gcloud-node, I fixed it with these 3 links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and the most important link:
https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml to write your index.yaml file
As explained in the last link, an index allows complex queries to run faster by precomputing their result sets. When you get "no matching index found", it means you tried to run a complex query involving ordering or filtering. To make your query work, you need to create the index in Google Cloud Datastore by manually writing a config file that defines the indexes matching the queries you want to run. Here is how to fix it:
Create an index.yaml file in a folder (named, for example, indexes) in your app directory, following the directives for the Python config file: https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml, or get inspiration from the gcloud-node tests: https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
Create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
see https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes
Wait for the indexes to serve: in your developer console under Cloud Datastore > Indexes, the interface should display "serving" once an index is built.
Once it is serving, your query should work.
For example for this query:
var q = ds.createQuery('project')
.filter('tags =', category)
.order('-date');
index.yaml looks like:
indexes:
- kind: project
ancestor: no
properties:
- name: tags
- name: date
direction: desc
Try not to order the result. After removing orderby(), it worked for me.