According to the Corda documentation, working with attachments is a very simple task. But I could not find answers to the following points:
Where are attachments stored? In a database, or on the file system?
Does Corda have any limitation on attachment size (e.g. 10-100 MB)?
Are there any limits on the number of attachments?
Thanks.
Attachments are stored in the node's database, in the NODE_ATTACHMENTS table.
There is no limit on the number or size of attachments in a transaction. However, each compatibility zone has a set of network parameters, one of which is maxTransactionSize. This specifies the maximum allowed size in bytes of a transaction, including its attachments.
I am struggling to find out how to set a limit on the amount of storage each user can upload to my app's storage.
I found a Storage.storageLimitInBytes method online, but I don't see this method even mentioned in the Firebase docs, let alone instructions on how to set it.
In general, how do startups monitor how many times users upload images? Would they have a field in the user's document such as amountOfImagesUploaded, increment that count every time the user uploads an image, and use it to see who abuses the storage that way?
Or would I have a similar document that tracks a user's uploads per day, and take action on that user when the count reaches 100 or so?
I would really appreciate your help regarding this issue that I am facing.
Limits in Cloud Storage for Firebase security rules apply to each file/object separately; they don't apply to an entire operation.
You can limit what a user can upload through Firebase Storage's security rules. For example, this (from the linked docs) is a way to limit the size of uploaded files:
service firebase.storage {
  match /b/<your-firebase-storage-bucket>/o {
    // Only allow uploads of any image file that's less than 5MB
    match /images/{imageId} {
      allow write: if request.resource.size < 5 * 1024 * 1024
                   && request.resource.contentType.matches('image/.*');
    }
  }
}
But there is currently no way in these rules to limit the number of files a user can upload.
Some options to consider:
If you hardcode the names of the files that the user uploads (which also implies you'll limit the number of files they can upload), and create a folder for the files for each specific user, you can determine the sum of all files in a user's folder, and thus limit on that sum.
For example: if you fix the file names and limit the allowed file names to be numbered 1..5, the user can only ever have five files in storage:
match /public/{userId}/{imageId} {
  allow write: if imageId.matches("[1-5]\.txt");
}
Alternatively, you can ZIP all files together on the client, and then upload the resulting archive. In that case, the security rules can enforce the maximum size of that file.
And of course you can include client-side JavaScript code to check the maximum size of the combined files in both of these cases. A malicious user can bypass this JavaScript easily, but most users aren't malicious and will thank you for saving their bandwidth by preventing an upload that would be rejected anyway.
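For instance, a minimal client-side pre-check could look like this (the limit and the input element id are placeholders, and the security rules remain the real gate):
// Client-side convenience check only; it can be bypassed, so keep the rules as the real limit
const MAX_TOTAL_BYTES = 5 * 1024 * 1024; // placeholder, match it to your rules

function filesWithinLimit(fileList) {
  // Sum the sizes reported by the browser's File API
  const total = Array.from(fileList).reduce((sum, file) => sum + file.size, 0);
  return total <= MAX_TOTAL_BYTES;
}

// Usage: only start the upload when the combined size is acceptable
const input = document.querySelector('#uploads'); // placeholder element id
if (!filesWithinLimit(input.files)) {
  alert('These files exceed the upload limit.');
}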
You can also use an HTTPS Cloud Function as your upload target, and then only pass the files on to Cloud Storage if they meet your requirements. Alternatively, you can use a Cloud Function that triggers on the upload from the user and validates the files for that user after the change. For example, you could upload the files through a Cloud Function/server and keep track of the total size that a user has uploaded. For that:
Upload the image to your server
Check the size and add it to the total size stored in a database
If the user has exceeded 150 GB, return a quota-exceeded error; otherwise upload to Firebase Storage (user -> server -> Firebase Storage)
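A rough sketch of that flow, assuming an HTTPS function, a Firestore document per user for bookkeeping, and a hypothetical parseUpload() helper (e.g. built on busboy) that extracts the file from the request:
// Sketch only: parseUpload() and the "usage/{uid}" document layout are assumptions, not Firebase APIs
// Assumes firebase-functions as `functions` and an initialised firebase-admin SDK as `admin`
const QUOTA_BYTES = 150 * 1024 * 1024 * 1024; // 150 GB

exports.uploadImage = functions.https.onRequest(async (req, res) => {
  const { uid, fileName, buffer } = await parseUpload(req); // hypothetical multipart parser
  const usageRef = admin.firestore().doc(`usage/${uid}`);
  const usage = (await usageRef.get()).data() || { bytes: 0 };

  // Reject before touching Cloud Storage if the quota would be exceeded
  if (usage.bytes + buffer.length > QUOTA_BYTES) {
    return res.status(403).send('quota exceeded');
  }

  // Forward the file to Cloud Storage and record the new total
  await admin.storage().bucket().file(`uploads/${uid}/${fileName}`).save(buffer);
  await usageRef.set({ bytes: usage.bytes + buffer.length }, { merge: true });
  return res.status(200).send('ok');
});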
An easier alternative would be to use Cloud Storage triggers, which will run a Cloud Function every time a new file is uploaded. You can check the object's size using its metadata and keep adding it up in the database. In this case, you can store the total storage used by a user in a custom claim, in bytes.
// Assumes an initialised firebase-admin SDK as `admin` and object paths like "uploads/{uid}/..."
exports.updateTotalUsage = functions.storage.object().onFinalize(async (object) => {
  const uid = object.name.split('/')[1];
  // check total storage currently used
  const user = await admin.auth().getUser(uid);
  const used = Number((user.customClaims || {}).size || 0);
  // add size of new object to it and update custom claim "size" (total storage in bytes)
  await admin.auth().setCustomUserClaims(uid, { ...user.customClaims, size: used + Number(object.size) });
})
Then you can write a security rule that checks that the sum of the size of the new object and the total storage already in use does not exceed 150 GB:
allow write: if request.resource.size + request.auth.token.size < 150 * 1024 * 1024 * 1024;
You can also have a look at this thread if you need per-user storage validation. The solution is a little bit tricky, but it can be done with:
https://medium.com/@felipepastoree/per-user-storage-limit-validation-with-firebase-19ab3341492d
Google Cloud (or the Firebase environment) doesn't know your users. It knows your application, and your application does.
If you want statistics per user, you have to log that data somewhere and perform sums/aggregations to get your metrics.
A usual way is to use Firestore to store that information and to increment the number of files or the total space used.
A less usual solution is to log each action to Cloud Logging and set up a sink from Cloud Logging to BigQuery, then run the aggregations in BigQuery (the latency is higher; it all depends on whether you want a synchronous or asynchronous check of those metrics).
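A minimal sketch of the Firestore approach, keeping a per-user file count and byte total from a Storage trigger (the usage/{uid} document shape and the path convention are assumptions):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Increment the per-user file count and byte total every time an object is finalized
exports.trackUpload = functions.storage.object().onFinalize(async (object) => {
  const uid = object.name.split('/')[1]; // assumes "uploads/{uid}/..." paths
  await admin.firestore().doc(`usage/${uid}`).set({
    files: admin.firestore.FieldValue.increment(1),
    bytes: admin.firestore.FieldValue.increment(Number(object.size)),
  }, { merge: true });
});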
I'm setting up WSO2 APIM HA in a distributed environment and I have some challenges with this documentation.
The documentation states: "Note: When configuring clustering, ignore the WSO2_CARBON_DB data source configuration."
The question is: can I really not use the CARBON DB instead of the UM and REG databases in HA?
The documentation says to configure the following:
AM DB - in the Publisher, Store, and Key Manager nodes
UM DB - in the Publisher, Store, and Key Manager nodes
REG DB - in the API Publisher and Store nodes (single tenant)
MB DB - in the Traffic Manager nodes (each TM has its own DB)
The question is: can I completely fill in one master-datasources.xml file and copy it to all components so I don't have to edit it on each server (only editing the second TM's datasource to point to the second MB DB)?
Yes, it is fine to completely fill in one master-datasources.xml file and copy it to all other components, except for WSO2_MB_STORE_DB, which is the MB DB.
The MB DB (WSO2_MB_STORE_DB) has to be separate for each node, as this DB is used for traffic as well as internally by throttling policies, which generate a very high rate of DB transactions.
It will work if you don't keep WSO2_MB_STORE_DB separate, but it will produce a large number of DB transactions, which can slow down your single DB. So it's highly advisable to maintain a separate DB on each node. It will also make debugging in production environments easier.
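For illustration, the per-node difference would then only be the WSO2_MB_STORE_DB entry in master-datasources.xml, roughly like this (the URL, credentials, and JNDI name are placeholders; copy the exact structure from your product's default file):
<datasource>
    <name>WSO2_MB_STORE_DB</name>
    <jndiConfig><name>WSO2MBStoreDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Point each Traffic Manager at its own MB store database -->
            <url>jdbc:mysql://db-host:3306/mb_db_tm1</url>
            <username>wso2user</username>
            <password>changeme</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>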
Please find below my questions:
1) Locally I don't have a network map, so do maxTransactionSize and maxMessageSize need to be made part of the extraConfig in deployNodes for each node?
2) Let's say I have a 100 MB Excel file which I zip and then upload to the node using rpc.uploadAttachment, and the SecureHash received is then added to a transaction. After successful completion of the transaction, will both parties have the attachment, or will the receiver get the file only when they open the attachment?
3) If it's when the receiver opens the attachment: it's requested from the sender, the file travels over the network, reaches the receiver, and is stored in the H2 DB for future use. If the attachment is required later, can the blob be provided directly from the DB?
4) Where and how does attachmentContentCacheSizeMegaBytes come into the picture? Since we are already storing the attachment in the H2 DB, where is it used? As a blob limit for the NODE_ATTACHMENTS table?
5) Also, is the file ever stored on the file system, at the time of upload to the node, or does it get stored directly in the H2 DB?
1) maxTransactionSize and maxMessageSize are set by the network operator, and individual nodes cannot modify them. This is for compatibility reasons: all the nodes on the network need to be able to handle the largest possible transaction to ensure they can resolve any transactions they receive.
2) The receiver node downloads the attachment immediately, not when it first opens the attachment.
3) N/A
4) The attachmentContentCacheSizeMegaBytes node configuration option is optional and specifies how much memory should be used to cache attachment contents in memory. It defaults to 10 MB (see the snippet after this list).
5) The attachment is stored in the node's database as a blob when it is first uploaded.
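For reference, that cache setting is an optional top-level entry in node.conf; a minimal sketch (the value shown is just the documented default):
// node.conf (HOCON): size of the in-memory attachment content cache, in megabytes
attachmentContentCacheSizeMegaBytes = 10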
We have a scenario where we need to populate a collection with the latest data every hour, whenever we receive a data file in blob storage from external sources, and at the same time we do not want to impact live users while updating the collection.
So we have done the following:
Created 2 databases, with collection 1 in both databases
Created another collection in a different database (the configuration database) with Active and Passive properties, which hold Database1 and Database2 as their values
Now our web job runs every time it sees the file in blob storage, checks this configuration database to identify which database is active and which is passive, processes the XML file, and updates the collection in the passive database (as that is not used by the live feed); once it is done, it swaps which database is marked active and which passive
Our service always checks which database is active and which is passive, and fetches the data accordingly to show to the user
As we have to delete the data and insert the new data in the web job, I wanted to know: is this the best design we could come up with? Does deleting and inserting the data cost anything? Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
Is this the best design we have come up with?
As David Makogon said, with your solution you need to manage and pay for multiple databases. If possible, you could create the new documents in the same collection and control which set of documents is active in your program logic.
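Either way, the "which one is live" marker can be a single pointer document that you swap once the reload finishes; a sketch with the @azure/cosmos Node SDK (the database, container, and document ids, and the loadInto callback, are assumptions):
const { CosmosClient } = require('@azure/cosmos');
const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT, key: process.env.COSMOS_KEY });
const config = client.database('configuration').container('settings');

// Read the active/passive pointer, reload the passive side, then swap the pointer
async function refreshAndSwap(loadInto) {
  const { resource: pointer } = await config.item('db-pointer', 'db-pointer').read();
  await loadInto(pointer.passive); // bulk-load the new data into the passive target
  await config.item('db-pointer', 'db-pointer').replace({
    ...pointer,
    active: pointer.passive,
    passive: pointer.active,
  });
}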
Does deleting and inserting the data cost anything?
Each operation/request consumes request units, which are charged. For Request Unit and DocumentDB pricing details, please refer to:
What is a Request Unit
DocumentDB pricing details
Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
A stored procedure provides a way to group operations like inserts and submit them in bulk. You could create the stored procedure and then execute it from your WebJobs function.
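As a sketch, a server-side bulk-insert stored procedure looks roughly like this (it runs inside DocumentDB/Cosmos DB, so it is JavaScript; the array of documents is passed in as the parameter):
// Bulk-insert stored procedure executed inside the database engine
function bulkInsert(docs) {
    var collection = getContext().getCollection();
    var count = 0;

    function insertNext() {
        if (count >= docs.length) {
            // All documents written; report how many
            getContext().getResponse().setBody(count);
            return;
        }
        var accepted = collection.createDocument(collection.getSelfLink(), docs[count], function (err) {
            if (err) throw err;
            count++;
            insertNext();
        });
        // Not accepted means the sproc is running out of time/RUs; return progress so the caller can resume
        if (!accepted) getContext().getResponse().setBody(count);
    }

    insertNext();
}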
I am using SQLite in my application for read access only. The DB is hit often by my application, and I can see that the header (100 bytes) of the database is read every time I access the database.
More precisely, 16 bytes starting at the 24th byte of the header are read every time. My question is: if the database is used only for reading, why is the header read every time, given that the database connection is not closed? Can we make it read the header only once?
Thanks!!
Google search gave me this link, and it says
"Your process may promise that it will only read the database, but there
might be some other process writing to it.
Not being a server, sqlite has no other way to find that out than by
reading the header over and over again. It has to check whether the
schema was changed, or whatever other info is in those bytes."
http://www.mail-archive.com/sqlite-users@sqlite.org/msg69900.html