Objectify Web Safe Key Usage - google-cloud-datastore

I am using Objectify to store and retrieve data from the App Engine Datastore.
A string version of the key is created from the Datastore object's id:
public String getWebsafeKey() {
    return Key.create(UserData.class, id).getString();
}
The websafeKey is then used to load the UserData object from the Datastore:
Key<UserData> userDataKey = Key.create(websafeKey);
UserData userData = ofy().load().key(userDataKey).now();
In our unit testing, when the websafeKey is altered slightly, the UserData object can still be retrieved.
Passed websafeKey - agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDC
Actual websafeKey - agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDA
Is this a known limitation, or can it be addressed?

websafeKeys are URL-safe base64 encoded strings (without padding).
Somehow both
agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDC and
agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDA decode to the same bytes (roughly jj~c2hqualityrMovie plus some non-printable characters).
Give it a try at https://www.base64decode.org/
The reason is that the key string's length is not a multiple of 4, so the final base64 character only contributes some of its bits to the last decoded byte; the remaining low-order bits are discarded. Several different trailing characters (here A and C) therefore decode to exactly the same byte sequence, which is why the altered key still resolves to the same entity.
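A minimal, self-contained Java sketch (using java.util.Base64 rather than the Objectify API) that demonstrates the effect and one way to reject altered keys by round-tripping them; the class name and the validation approach are only illustrative:
import java.util.Arrays;
import java.util.Base64;

public class WebsafeKeyCheck {
    public static void main(String[] args) {
        String actual = "agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDA";
        String passed = "agxqfmMyaHF1YWxpdHlyEgsSBU1vdmllGICAgJDSioELDC";

        // Websafe keys are URL-safe base64 without padding.
        byte[] actualBytes = Base64.getUrlDecoder().decode(actual);
        byte[] passedBytes = Base64.getUrlDecoder().decode(passed);

        // Prints true: the differing final character only changes bits that
        // are discarded during decoding, so both strings yield the same bytes.
        System.out.println(Arrays.equals(actualBytes, passedBytes));

        // One way to reject non-canonical input: re-encode the decoded bytes
        // and require the result to match the string the client supplied.
        String canonical = Base64.getUrlEncoder().withoutPadding().encodeToString(passedBytes);
        System.out.println(canonical.equals(passed)); // false -> reject
        System.out.println(canonical.equals(actual)); // true  -> accept
    }
}
At the Objectify level, a similar check should be possible by comparing Key.create(websafeKey).getString() against the incoming string and rejecting mismatches, if strict key validation matters for your API.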

Related

Flutter Firebase firestore append data with unique ID

I'm working on a Flutter app where users can save multiple addresses. Previously I used the Realtime Database, where it was easy to push data under any child with a unique ID, but I have since switched to Firestore and want to achieve the same thing there. So I generated a UUID to use as the unique ID when appending to user_address.
This is how I want it to look, this is what user_address currently looks like, and this is how it's actually getting saved in Firestore.
So my question is: to append data with a unique ID, do I have to create a collection inside the users document, or is the above possible?
Below is my code. I tried set and update, and even FieldValue.arrayUnion(userServiceAddress), but I'm not getting the desired result.
var uuid = Uuid();
var fireStoreUserRef =
    FirebaseFirestore.instance.collection('users').doc(id);
Map locationMap = {
  'latitude': myPosition.latitude,
  'longitude': myPosition.longitude,
};
var userServiceAddress = <String, dynamic>{
  uuid.v4(): {
    'complete_address': completedAddressController.text,
    'floor_option': floorController.text,
    'how_to_reach': howtoreachController.text,
    'location_type': locationTag,
    'saved_date': DateTime.now().toString(),
    'user_geo_location': locationMap,
    'placeId': addressId
  }
};
await fireStoreUserRef.update({'user_address': userServiceAddress});
If I use set or update, the whole map gets replaced with the new value instead of being appended to. So is creating a collection the only solution here, and if I do create a collection, are there any issues I'll face?
You won't have any issues per se by storing addresses in a separate collection with a one-to-many relationship, but depending on your usage, you may see much higher read/write requests with this approach. This can make exceeding your budget far more likely.
Fortunately, Firestore allows updating fields in nested objects via dot notation. Try this:
var userServiceAddress = {
  'complete_address': completedAddressController.text,
  'floor_option': floorController.text,
  'how_to_reach': howtoreachController.text,
  'location_type': locationTag,
  'saved_date': DateTime.now().toString(),
  'user_geo_location': locationMap,
  'placeId': addressId
};
await fireStoreUserRef.update({'user_address.${uuid.v4()}': userServiceAddress});

Saving JSON as a value inside embedded entities in Cloud Datastore

In Google Cloud Datastore entities - is there a way to store JSON as the VALUE of an embedded entity property?
For example, I would expect something like this:
{
  "properties": {
    "someObject": {
      "objectValue": {"some": "sutome", "json": "object"}
    }
  }
}
objectValue would be the type of the property
Thanks
In Google Cloud Datastore entities - is there a way to store JSON as the VALUE of an embedded entity property?
Not directly. However, what you can do is stringify the JSON object and store it as a StringValue (just remember to parse the value back into a JSON object once you retrieve it). Note that if the property storing the string is indexed, the maximum size of the string is 1,500 bytes; if the property is not indexed, the maximum size is 1 MB (1,000,000 bytes).
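A minimal sketch with the google-cloud-datastore Java client, assuming the JSON has already been serialized to a string by whatever library you use; the kind, key, and property names are only illustrative:
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.StringValue;

public class JsonAsStringExample {
    public static void main(String[] args) {
        Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
        Key key = datastore.newKeyFactory().setKind("SomeKind").newKey("some-id");

        // The JSON payload, already serialized to a string.
        String json = "{\"some\":\"sutome\",\"json\":\"object\"}";

        Entity entity = Entity.newBuilder(key)
                // Excluding the property from indexes lifts the 1,500-byte limit
                // for indexed strings; unindexed strings can be up to ~1 MB.
                .set("objectValue", StringValue.newBuilder(json)
                        .setExcludeFromIndexes(true)
                        .build())
                .build();
        datastore.put(entity);

        // On the way back out, read the string and parse it with your JSON library.
        Entity loaded = datastore.get(key);
        System.out.println(loaded.getString("objectValue"));
    }
}
The same StringValue can also sit inside an embedded entity (an EntityValue property) if you want to keep the nesting shown in the question.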

GitKit Client - Uploaded users cannot connect

We have an existing user database with SHA1-hashed passwords. We upload them to the Google Federated Database (through the GitKitClient Java lib), but these uploaded users cannot log in: verifyPassword always returns "Incorrect password"! The call to uploadUsers looks like gitkitClient.uploadUsers('SHA1', new byte[0], gitkitUsers)
(We must provide an empty byte array as the second parameter (the hash key), since we get NPEs if we pass null.)
The method that creates the GitkitUsers in the list is as follows:
private GitkitUser createGitkitUserFromUser(User user) {
    GitkitUser gitkitUser = new GitkitUser()
    gitkitUser.email = user.email
    gitkitUser.localId = getLocalId(user)
    gitkitUser.name = user.displayName
    gitkitUser.hash = user.password?.bytes
    if (user.pictureFileName) {
        gitkitUser.photoUrl = user.getPictureUrl()
    }
    return gitkitUser
}
We see no way to investigate further. Has anyone used this successfully?
Make sure that the hashKey you use in setPassword() is the same one used in uploadUsers().
I am using the PHP SDK so I can't share Java code, but when I did NOT use the same hashKey in both places, I had the same problem.

Good way to replace invalid characters in firebase keys?

My use case is saving a user's info. When I try to save data to Firebase using the user's email address as a key, Firebase throws the following error:
Error: Invalid key e@e.ee (cannot contain .$[]#)
So, apparently, I cannot index user info by email address. What is the best practice for replacing the .?
I've had success changing the . to a -, but that won't cut it since some email addresses contain -s.
Currently, I'm using
var cleanEmail = email.replace('.','`');
but there are likely going to be conflicts down the line with this.
In the email address, replace the dot . with a comma ,. This pattern is best practice.
The comma , is not an allowable character in email addresses but it is allowable in a Firebase key. Symmetrically, the dot . is an allowable character in email addresses but it is not allowable in a Firebase key. So direct substitution will solve your problem. You can index email addresses without looping.
You also have another issue.
const cleanEmail = email.replace('.',','); // only replaces first dot
will only replace the first dot . But email addresses can have multiple dots. To replace all the dots, use a regular expression.
const cleanEmail = email.replace(/\./g, ','); // replaces all dots
Or alternatively, you could also use the split() - join() pattern to replace all dots.
const cleanEmail = email.split('.').join(','); // also replaces all dots
We've dealt with this issue many times and while on the surface it seems like using an email as a key is a simple solution, it leads to a lot of other issues: having to clean/parse the email so it can actually be used. What if the email changes?
We have found that changing the format of how the data is stored is a better path. Suppose you just need to store one thing, the user name.
john@somecompany.com: "John Smith"
changing it to
randomly_generated_node_name
    email: "john@somecompany.com"
    first: "John"
    last: "Smith"
The randomly_generated_node_name is a string that Firebase can generate via childByAutoId, or really any type of reference that is not tied directly to the data.
This offers a lot of flexibility: you can now change the person's last name - say, if they get married - or change their email. You could add an 'index' child (0, 1, 2, etc.) that could be used for sorting. The data can be queried on any child value. All because the randomly_generated_node_name is a static reference to the variable child data within the node.
It also allows you to expand the data in the future without altering the existing data. Add address, favorite food, an index for sorting etc.
Edit: a Firebase query for email in ObjC:
//references all of the users ordered by email
FQuery *allUsers = [myUsersRef queryOrderedByChild:@"email"];
//ref the user with this email
FQuery *thisSpecificUser = [allUsers queryEqualToValue:@"john@somecompany.com"];
//load the user with this email
[thisSpecificUser observeEventType:FEventTypeChildAdded withBlock:^(FDataSnapshot *snapshot) {
    //do something with this user
}];
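For Android, a roughly equivalent query in Java might look like the sketch below; myUsersRef is assumed to be a DatabaseReference pointing at the same users node (com.google.firebase.database classes), and the listener body is only illustrative:
Query thisSpecificUser = myUsersRef.orderByChild("email").equalTo("john@somecompany.com");

thisSpecificUser.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        for (DataSnapshot child : snapshot.getChildren()) {
            // do something with this user
        }
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // handle the cancelled read
    }
});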
I can think of two major ways to solve this issue:
Encode/Decode function
Because of the limited set of characters allowed in a Firebase key, a solution is to transform the key into a valid format (encode). Then have an inverse function (decode) to transform the encoded key back into the original key.
A general encode/decode function might be transforming the original key into bytes, then converting them to a hexadecimal representation. But the size of the key might be an issue.
Let's say you want to store users using the e-mail as key:
# path: /users/{email} is User;
/users/alice@email.com: {
  name: "Alice",
  email: "alice@email.com"
}
The example above doesn't work because of the dot in the path. So we use the encode function to transform the key into a valid format. alice@email.com in hexadecimal is 616c69636540656d61696c2e636f6d, then:
# path: /users/{hex(email)} is User;
/users/616c69636540656d61696c2e636f6d: {
  name: "Alice",
  email: "alice@email.com"
}
Any client can access that resource as long as they share the same hex function.
Edit: Base64 can also be used to encode/decode the key. It may be more compact than hexadecimal, but there are many different implementations; if clients don't share the exact same implementation, they will not interoperate.
Specialized functions (e.g. ones that handle e-mails only) can also be used, but be sure to handle all the edge cases.
Encode function with original key stored
Doing a one-way transformation of the key is a lot easier. So, instead of using a decode function, just store the original key in the database.
A good encode function for this case is the SHA-256 algorithm. It's a common algorithm with implementations in many platforms. And the chances of collisions are very slim.
The previous example with SHA-256 becomes like this:
# path: /users/{sha256(email)} is User;
/users/55bf4952e2308638427d0c28891b31b8cd3a88d1610b81f0a605da25fd9c351a: {
  name: "Alice",
  email: "alice@email.com"
}
Any client with the original key (the e-mail) can find this entry, because the encode function is known. And even if the key gets bigger, the SHA-256 output is always the same length, so it is guaranteed to be a valid Firebase key.
I am using the following code to convert the email to a hash and then use that hash as the key in Firebase:
import java.security.MessageDigest;
import java.util.Formatter;

public class HashingUtils {

    public HashingUtils() {
    }

    // generate a 256-bit hash using SHA-256
    public String generateHashkeySHA_256(String email) {
        String result = null;
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(email.getBytes("UTF-8"));
            return byteToHex(hash); // make it printable
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return result;
    }

    // generate a 160-bit hash using SHA-1
    public String generateHashkeySHA_1(String email) {
        String result = null;
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            byte[] hash = digest.digest(email.getBytes("UTF-8"));
            return byteToHex(hash); // make it printable
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return result;
    }

    public String byteToHex(byte[] bytes) {
        Formatter formatter = new Formatter();
        for (byte b : bytes) {
            formatter.format("%02x", b);
        }
        String hex = formatter.toString();
        formatter.close();
        return hex;
    }
}
Code for adding the user to Firebase:
public void addUser(User user) {
    Log.d(TAG, "addUser: ");
    DatabaseReference userRef = database.getReference("User");
    if (!TextUtils.isEmpty(user.getEmailId())) {
        String hashEmailId = hashingUtils.generateHashkeySHA_256(user.getEmailId());
        Log.d(TAG, "addUser: hashEmailId " + hashEmailId);
        userRef.child(hashEmailId).setValue(user);
    } else {
        Log.d(TAG, "addUser: empty emailId");
    }
}
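For completeness, a hedged sketch of the read path: the user is looked up again by recomputing the hash from the e-mail. It reuses the database, hashingUtils, and TAG fields from the code above and assumes the standard com.google.firebase.database classes (DatabaseReference, ValueEventListener, DataSnapshot, DatabaseError); the listener body is only illustrative.
public void loadUserByEmail(String emailId) {
    DatabaseReference userRef = database.getReference("User");
    // Recompute the same SHA-256 key that was used when the user was stored.
    String hashEmailId = hashingUtils.generateHashkeySHA_256(emailId);
    userRef.child(hashEmailId).addListenerForSingleValueEvent(new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot snapshot) {
            User user = snapshot.getValue(User.class);
            Log.d(TAG, "loadUserByEmail: " + user);
        }

        @Override
        public void onCancelled(DatabaseError error) {
            Log.w(TAG, "loadUserByEmail: cancelled", error.toException());
        }
    });
}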

Different RavenDB collections with documents of same type

In RavenDB I can store objects of type Products and Categories and they will automatically be located in different collections. This is fine.
But what if I have 2 logically completely different types of products that use the same class? Or, instead of 2, I could have an arbitrary number of different types of products. Would it then be possible to tell Raven to split the product documents up into collections, let's say based on a string property available on the Product class?
Thank you in advance.
EDIT:
I have created and registered the following StoreListener that changes the collection for the documents being stored at runtime. This results in the documents correctly being stored in different collections, giving a nice, logical grouping of the documents.
public class DynamicCollectionDefinerStoreListener : IDocumentStoreListener
{
    public bool BeforeStore(string key, object entityInstance, RavenJObject metadata)
    {
        var entity = entityInstance as EntityData;
        if (entity == null)
            throw new Exception("Cannot handle object of type " + entityInstance.GetType());

        metadata["Raven-Entity-Name"] = RavenJToken.FromObject(entity.TypeId);
        return true;
    }

    public void AfterStore(string key, object entityInstance, RavenJObject metadata)
    {
    }
}
However, it seems I have to adjust my queries too in order to get the objects back. A typical query of mine used to look like this:
session => session.Query<EntityData>().Where(e => e.TypeId == typeId)
With 'typeId' being the name of the new Raven collection (and also saved as a separate field on the EntityData object).
How would I go about querying my objects back? I can't find the spot where I can define my collection at runtime prior to executing my query.
Do I have to execute raw Lucene queries? Or can I maybe implement a query listener?
EDIT:
I found a way of storing, querying and deleting objects using dynamically defined collections, but I'm not sure this is the right way to do it:
Document store listener:
(I use the class defined above)
Method resolving index names:
private string GetIndexName(string typeId)
{
    return "dynamic/" + typeId;
}
Store/Query/Delete:
// Storing
session.Store(entity);

// Querying
var someResults = session.Query<EntityData>(GetIndexName(entity.TypeId))
    .Where(e => e.EntityId == entity.EntityId);
var someMoreResults = session.Advanced.LuceneQuery<EntityData>(GetIndexName(entityTypeId))
    .Where("TypeId:Colors AND Range.Basic.ColorCode:Yellow");

// Deleting
var loadedEntity = session.Query<EntityData>(GetIndexName(entity.TypeId))
    .Where(e => e.EntityId == entity.EntityId)
    .SingleOrDefault();
if (loadedEntity != null)
{
    session.Delete<EntityData>(loadedEntity);
}
I have the feeling it's getting a little dirty, but is this the way to store/query/delete when specifying collection names at runtime? Or am I setting a trap for myself this way?
Stephan,
You can provide the logic for deciding on the collection name using:
store.Conventions.FindTypeTagName
This is handled statically, using the generic type.
If you want to make that decision at runtime, you can provide it using a DocumentStoreListener.
