In the CBPeripheralManager documentation, the startAdvertising method takes a dictionary containing the data you would like to advertise, and according to the documentation it only accepts two possible keys in that dictionary: CBAdvertisementDataLocalNameKey and CBAdvertisementDataServiceUUIDsKey.
However, Apple's documentation for Turning an iOS Device into an iBeacon suggests that you can pass the dictionary returned from CLBeaconRegion's peripheralData method. The dictionary returned from this method contains one key-value pair, with the key named "kCBAdvDataAppleBeaconKey" and the value encoding the proximityUUID, major, minor, and beacon identifier.
A dump of a dictionary returned from the peripheralData method is displayed here:
Dictionary Dump
My question is: how does CBPeripheralManager.startAdvertising accept a dictionary with the key "kCBAdvDataAppleBeaconKey" and still broadcast without error, if the only keys it supports are CBAdvertisementDataLocalNameKey and CBAdvertisementDataServiceUUIDsKey?
The simplest and most likely explanation is that the documentation about CBPeripheralManager.startAdvertising only accepting the two keys is inaccurate. The docs may simply never have been updated after iBeacon support was released.
On a related note, I suspect (but cannot confirm) that the method supports even more keys as private APIs. You'd have to decompile the framework binary to figure out what these are.
The reality is that it is very common for documentation to be out of sync with new features added to APIs, and even more common for documentation not to mention secret behavior that isn't publicly supported.
I was not able to get clarity from the Google protobuf naming convention documentation, so I thought I would reach out to the community to ask what the method, request, and response naming should be.
If I have something that fetches a list of students using studentIds, should I name my method
rpc StudentByIdList(StudentByIdListRequest) returns (StudentByIdResponse)
or
rpc ListStudent(StudentIdsRequest) returns (StudentResponse)
or something else
The second one doesn't seem right, because I may have more than one method that fetches students, e.g. by a name prefix as well as by student ids.
I went through the Google protobuf naming convention documentation and did find that prepositions are not allowed, so By in the method name is wrong. But then how do I express in the method name that I want to fetch the list of students for a set of student ids?
I don't know whether there is a documented convention.
The style guide is brief.
If I were implementing your solution, I would 'encode' the difference between the 2+ methods in the request message:
rpc ListStudents(ListStudentsRequest) returns (ListStudentsResponse)
ListStudentsRequest would include a oneof (id, name) or, more commonly and more extensibly, a filter field in which constraints can be specified (as a string). This mechanism also permits the inclusion of e.g. paging.
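To illustrate the oneof approach on the server side, here is a rough sketch of the generated Java handler. It assumes a hypothetical ListStudentsRequest whose oneof is named selector, with a student_ids message field and a name_prefix string field, plus listByIds/listByNamePrefix helpers; none of these names come from a real API, so adjust them to your actual .proto:

    // Hypothetical handler; ListStudentsRequest/ListStudentsResponse are the
    // classes protoc would generate from the assumed .proto described above.
    public ListStudentsResponse listStudents(ListStudentsRequest request) {
      switch (request.getSelectorCase()) {
        case STUDENT_IDS:
          // Lookup by an explicit list of student ids.
          return listByIds(request.getStudentIds().getIdsList());
        case NAME_PREFIX:
          // Lookup by "name starts with".
          return listByNamePrefix(request.getNamePrefix());
        case SELECTOR_NOT_SET:
        default:
          throw new IllegalArgumentException("ListStudentsRequest must set a selector");
      }
    }

A plain string filter field works the same way, except the server parses the constraint expression instead of switching on the oneof case.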
Google has a Google APIs repo that documents its services that support gRPC.
You can review the proto files in the google subdirectory to see how Google solves this problem.
The proto for Google Cloud Storage includes List methods for the various resources in the service.
For ListBucketsRequest you can see a prefix field that's used to filter the results.
The auto IDs generated by an Android client in a Firestore collection all seem to meet certain criteria for me:
20 characters in length
Start with a dash (-)
Seem to cycle through characters based on time?
By the last point I mean that the first characters will look very similar if the creation happened within a similar time frame, e.g. -LZ.., -L_.., and -La... This describes the Flutter implementation.
However, looking at the JavaScript implementation of the auto ID, I would assume that the only common criterion across all clients is the length of 20 characters. Is this assumption correct?
Across all clients, the auto ID has a length of 20 characters:
iOS
Android
JavaScript (Web)
Flutter
You're referring to two types of IDs:
The push IDs as they are generated by the Firebase Realtime Database SDK when you call DatabaseReference.push() (or childByAutoId in iOS). These are described in The 2^120 Ways to Ensure Unique Identifiers, and a JavaScript implementation can be found here.
The auto IDs that are generated by the Cloud Firestore SDK when you call add(..) or doc() (without arguments). The JavaScript implementation of this can indeed be found in the Firestore SDK repo.
The only things these two ID types have in common are that they're designed with enough entropy that, realistically, they will be globally unique, and that they're both 20 characters long.
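For reference, the Firestore auto-ID generator in the JavaScript SDK linked above boils down to picking 20 random characters from a 62-character alphabet. Here is a minimal Java sketch of that idea (a reconstruction for illustration, not the SDK's actual code):

    import java.security.SecureRandom;

    public class AutoIdSketch {
        // Alphabet used by the Firestore auto-ID generator: A-Z, a-z, 0-9.
        private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        private static final int AUTO_ID_LENGTH = 20;
        private static final SecureRandom RANDOM = new SecureRandom();

        // Returns a 20-character random ID. Unlike Realtime Database push IDs,
        // there is no timestamp component, so these IDs do not sort by creation time.
        public static String newAutoId() {
            StringBuilder sb = new StringBuilder(AUTO_ID_LENGTH);
            for (int i = 0; i < AUTO_ID_LENGTH; i++) {
                sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
            }
            return sb.toString();
        }
    }

Push IDs, by contrast, start with an encoded timestamp, which is why IDs created around the same time share a common prefix such as -LZ.. or -La..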
I'm trying to create a Cloud Dataflow 2.x (aka Apache Beam) PTransform to filter out elements of a PCollection that were already "seen" previously.
The basic idea is that the PTransform gets a Function to calculate a natural id (a "primary key") for each of the PCollection's elements. Each of these ids will be used to create a Cloud Datastore Key and stored in a dummy Entity Kind ("table"); if the insertion fails due to a duplicate key error, then I know that I already have seen the corresponding natural id and therefore the same PCollection element.
I assume the best way to implement this is to use bundles in the DoFn and issue batch requests to Datastore in my @FinishBundle method.
I understand that I cannot use transactional commits in this case, because if just one Insert mutation fails due to its key already existing, it will make the whole transaction fail, and the documentation says it's impossible to tell which of the keys is the already existing one.
If I use non-transactional Inserts, am I guaranteed that concurrent inserts of the same Key will have at least one of them fail?
Otherwise, which other options do I have? I'd rather not use transactions with just one mutation, but batch multiple mutations together.
I'm using the datastore-v1-proto-client / datastore-v1-protos API (aka the plain Datastore v1 REST API), not the new google-cloud-datastore API.
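To make the bundle-batching idea concrete, here is a rough sketch of such a DoFn in Beam 2.x Java. The datastoreInsertNew(...) method is a hypothetical placeholder for whatever non-transactional batch Insert you issue through the Datastore v1 client, assumed to report back which natural ids turned out to be duplicates; how exactly those duplicates are detected is the open question above.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.SerializableFunction;
    import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
    import org.joda.time.Instant;

    // Sketch of a dedup DoFn that buffers elements per bundle and checks them
    // against Datastore in one batch when the bundle finishes.
    class DedupByNaturalIdFn<T> extends DoFn<T, T> {

      private final SerializableFunction<T, String> naturalIdFn;
      private transient List<Buffered<T>> buffer;

      DedupByNaturalIdFn(SerializableFunction<T, String> naturalIdFn) {
        this.naturalIdFn = naturalIdFn;
      }

      private static class Buffered<T> {
        final T element;
        final String naturalId;
        final Instant timestamp;
        final BoundedWindow window;

        Buffered(T element, String naturalId, Instant timestamp, BoundedWindow window) {
          this.element = element;
          this.naturalId = naturalId;
          this.timestamp = timestamp;
          this.window = window;
        }
      }

      @StartBundle
      public void startBundle() {
        buffer = new ArrayList<>();
      }

      @ProcessElement
      public void processElement(ProcessContext c, BoundedWindow window) {
        // Keep the element together with its timestamp and window so it can be
        // re-emitted from finishBundle after the batch Datastore call.
        buffer.add(new Buffered<>(c.element(), naturalIdFn.apply(c.element()),
            c.timestamp(), window));
      }

      @FinishBundle
      public void finishBundle(FinishBundleContext c) {
        // Hypothetical helper: one non-transactional batch insert of dummy entities
        // keyed by the natural ids; returns the ids whose keys already existed.
        Set<String> alreadySeen = datastoreInsertNew(buffer);
        for (Buffered<T> b : buffer) {
          if (!alreadySeen.contains(b.naturalId)) {
            c.output(b.element, b.timestamp, b.window);
          }
        }
        buffer.clear();
      }

      private Set<String> datastoreInsertNew(List<Buffered<T>> batch) {
        // Placeholder for the Datastore v1 commit described in the question.
        throw new UnsupportedOperationException("sketch only");
      }
    }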
I'm currently looking into server-side validation of a GoogleIDToken for Google Sign-in (Android & iOS). Documentation here
In the example, the "sub" field in the object returned by the Google API endpoint is read as a string, but it looks like it may actually be a (really big) number.
Some other tests using some users on my side also show big numbers.
Looking deeper into the Payload documentation, it looks like this value could be null, but leaving aside that possibility, can we assume that this string is actually a number?
This is important because we want to store it in a database, and saving it as a number might actually be more efficient than a string.
I work on the team at Google: this value should be stored as a string. It may be parseable as a number, but there is no guarantee, so do not rely on that assumption!
If you're going to do arithmetic on it, then store it as a number. Otherwise, don't.
This is a general rule.
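For what it's worth, here is a minimal sketch of that server-side check with the Google API client library for Java. Payload.getSubject() already returns the sub claim as a String, which is the type you should persist (CLIENT_ID below is a placeholder for your own OAuth client id):

    import java.util.Collections;
    import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;
    import com.google.api.client.googleapis.auth.oauth2.GoogleIdTokenVerifier;
    import com.google.api.client.http.javanet.NetHttpTransport;
    import com.google.api.client.json.jackson2.JacksonFactory;

    public class GoogleTokenCheck {
        private static final String CLIENT_ID = "your-client-id.apps.googleusercontent.com";

        // Verifies the ID token and returns the "sub" claim as a String.
        public static String verifyAndGetSubject(String idTokenString) throws Exception {
            GoogleIdTokenVerifier verifier = new GoogleIdTokenVerifier.Builder(
                    new NetHttpTransport(), JacksonFactory.getDefaultInstance())
                .setAudience(Collections.singletonList(CLIENT_ID))
                .build();

            GoogleIdToken idToken = verifier.verify(idTokenString);
            if (idToken == null) {
                throw new IllegalArgumentException("Invalid ID token");
            }
            // Keep it as a string; do not parse it into a numeric type.
            return idToken.getPayload().getSubject();
        }
    }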
We have a Lotus Notes signed document and the user's public key.
What we need to do: enter the key into a field in a special application (it could be a Lotus Notes database or some other special software). Then we ask this special application: "Is this document really signed by this user with this public key?"
And our app must answer: yes or no.
We tried to write this special application and ran into a few issues:
The document has a field named $Signature, which is the hash of the signed fields, encrypted with the signer's private key. I can see the content of this field in the document's properties, but I can't extract it programmatically (I tried LotusScript and Java), and I haven't found any way to do it.
So I just manually copied the content of this field and pasted it into a field on a special form for further analysis. But there I hit another problem: I don't know how to decrypt this signature. What algorithm does Lotus use to sign the hash? If I know the algorithm, I expect I will be able to decrypt it with Java and get the hash of the signed fields.
And I believe there will be one more problem: I don't know how Lotus computes the hash of the fields. Does it use MD5? I need to know this in order to compare hashes and say whether this user signed the document or not.
So it's an interesting task, but right now it seems impossible to solve: there are three big problems in the way. Can anyone help with them?
The answer is: don't try to do this yourself. Not the way you described it. There's an API to validate Notes signatures.
Just copy the document's UNID to your database, and then write code using the Notes C API to open the document and call the API function NSFNoteVerifySignature() to validate it. You can do this from Java using JNI or from LotusScript by following the techniques that are described here, or you can use the LSX toolkit, or just write a standalone C program.
You would have to use the Notes C API anyhow to deal with two of the three points that you raised:
You need the C API to get at the contents of the $Signature item.
The signature is RSA.
You actually have two problems: the algorithm, and the input. You have to match them both. If I recall correctly, Lotus has described the hash algorithm as "modified MD2". Bear in mind, this goes back well over 20 years, and breaking compatibility is something that they don't like to do. It's possible that they've changed it when they upgraded RSA key sizes, but I don't recall hearing about that. But as I said, that's only half the problem. You need to get the raw input bytes in exactly the same format as the signature algorithm saw them, and for rich text fields this probably means reading raw CD records, which requires the C API.
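If you go the Java route, the overall shape is something like the sketch below. Everything here is a hypothetical JNI wrapper you would have to implement yourself in C against the Notes C API toolkit (NSFDbOpen, NSFNoteOpenByUNID, NSFNoteVerifySignature); it is not an existing library.

    // Hypothetical JNI bridge around the Notes C API; the native side opens the
    // database and note and calls NSFNoteVerifySignature() on it.
    public class NotesSignatureChecker {

        static {
            // Your own native wrapper library, built with the Notes C API toolkit.
            System.loadLibrary("notessigcheck");
        }

        // Returns the signer's name if the signature verifies; throws otherwise.
        public static native String verifySignature(String databasePath, String unid);

        public static void main(String[] args) {
            String signer = verifySignature("signed.nsf", "UNID-OF-THE-DOCUMENT");
            System.out.println("Signed by: " + signer);
        }
    }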