I would like to know the best way to upload a file using DDD and CQRS. I would like to save the image on the file system and save its name in the database.
PS: I know that DDD is not about how many layers a project has.
This is my example:
Customer
(Id, Name, Email, Picture (only one))
I'm not asking for the code that saves the image, but where to call the save-image method.
In the Controller, I have a CustomerViewModel with these fields. After that, I call my application layer (CustomerAppService), then a Command... and so on.
The method that saves images to a folder is in my infrastructure layer.
Should I call the save-to-folder method in the Controller? In the Application layer? In the CommandHandler?
Based on my experience, I solved the issue like this:
Create an endpoint (controller action) that generates a temporary link for uploading the file directly to storage (we used AWS S3, which provides the ability to create pre-signed URLs; see the sketch after this list)
Client uploads the file via that URL
Client sends an acknowledgement request with the metadata to another endpoint (controller action)
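A minimal sketch of the first step, assuming the AWS SDK for Java (v1); the bucket and key are whatever your application decides up front:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

// Generate a pre-signed PUT URL so the client can upload straight to S3,
// bypassing the application server entirely.
public class PresignedUpload {
    public static URL createUploadUrl(String bucket, String key) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000); // valid 15 min
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucket, key)
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);
        return s3.generatePresignedUrl(request);
    }
}

The controller action returns this URL (and remembers the key); the acknowledgement request in the last step then carries only metadata, never the file bytes.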
You can save the image in the Controller and retain a reference to the saved file, e.g. a path, the ID of a record in a database, an S3 bucket address, etc. That reference is what you would pass in your command, and it is what would be saved on the Customer record.
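A rough sketch of that flow, in Java for illustration; FileStorage, CommandBus and UpdateCustomerPictureCommand are hypothetical names, not a specific framework:

// The controller saves the binary via the infrastructure service and
// dispatches a command that carries only the resulting reference.
interface FileStorage { String save(byte[] bytes); }    // infrastructure layer
interface CommandBus { void dispatch(Object command); }

record UpdateCustomerPictureCommand(String customerId, String pictureRef) {}

class CustomerController {
    private final FileStorage fileStorage;
    private final CommandBus commandBus;

    CustomerController(FileStorage fileStorage, CommandBus commandBus) {
        this.fileStorage = fileStorage;
        this.commandBus = commandBus;
    }

    void updatePicture(String customerId, byte[] imageBytes) {
        // Save the image first; keep only a reference (path, record ID, S3 key...)
        String pictureRef = fileStorage.save(imageBytes);
        // The command and its handler never see the bytes, only the reference,
        // which ends up on the Customer record.
        commandBus.dispatch(new UpdateCustomerPictureCommand(customerId, pictureRef));
    }
}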
We need to configure the IdentityProvider from metadata stored in a database. It would seem, though, that the only way to supply the metadata to the IdentityProvider is through the MetadataLocation property, which supports a URL or file path.
Is there any way, which I've missed, to pass a stream object holding the metadata to the IdentityProvider?
Thanks
I'm not aware of any way using the standard code. The Load method that takes a stream is marked as internal; see here:
https://github.com/KentorIT/authservices/blob/master/Kentor.AuthServices/Metadata/MetadataLoader.cs
You could:
Write your database value to a temporary location and give that file path to the loader (sketched after this list)
Write an API route that serves up the metadata for a given IdP as a URL
Make an open-source contribution to add support for this
Don't use MetadataLocation; instead construct the IdentityProvider object and separately set the signing key, entity id, binding, etc.
etc.
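For the first option, the idea is simply to dump the database blob to a temporary file and hand its path to the loader. A sketch (in Java only to illustrate the shape of it; the actual project is .NET):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Write the metadata XML from the database to a temp file; the returned
// path is what you would pass as the metadata location.
public class MetadataTempFile {
    public static Path writeToTemp(byte[] metadataXml) throws IOException {
        Path tmp = Files.createTempFile("idp-metadata-", ".xml");
        tmp.toFile().deleteOnExit(); // best-effort cleanup
        Files.write(tmp, metadataXml);
        return tmp;
    }
}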
I noticed that to use Firebase Storage (Google Cloud Storage) I need to come up with a unique file name in order to upload a file.
I plan to keep a copy of that Storage file's location (the https URL or gs URL) in the Firebase Realtime Database, where clients will be able to read it and download the file separately.
However, I am unable to come up with unique filenames for the files located on Firebase Storage. Using a UUID generator might cause collisions in my case, since several clients are uploading images to a single Firebase root.
Here's my plan; I'd like to know if it will work.
Let's call my Firebase root Chatrooms, which consists of the keys chatroom_1, chatroom_2, ..., chatroom_n.
Under chatroom_k I have a node called "Content", which stores push keys uniquely generated by Firebase. Each push key represents a piece of content, but the actual content is stored in Firebase Storage, and a key called URL references the URL of that content. Can the filename for this content on Firebase Storage reuse the same randomized push key, as long as the bucket hierarchy reflects chatroom_k?
I am not sure whether Storage provides a push() function, but here is a suggestion:
Request a push() to a location in your Firebase database and use the generated key as the name.
In any case, you will probably need to store this name in the database too.
In my application I have a node called "photos", where I store information about the images I upload. I first do a push() to get a new key, and I use this key as the name of the uploaded image (see the sketch below).
Is this what you need or I misunderstood something?
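A minimal sketch of that approach on Android; "photos" is just my node/folder name, and the rest is standard Firebase SDK calls:

import android.net.Uri;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.storage.FirebaseStorage;
import com.google.firebase.storage.StorageReference;

// Use a database push key as the Storage filename, then store the
// resulting download URL back under the same key.
class PhotoUploader {
    void uploadPhoto(Uri localImageUri) {
        DatabaseReference photoRef = FirebaseDatabase.getInstance()
                .getReference("photos")
                .push();                        // unique push key, e.g. "-N3xY..."
        String key = photoRef.getKey();

        StorageReference fileRef = FirebaseStorage.getInstance()
                .getReference("photos/" + key); // same key as the filename

        fileRef.putFile(localImageUri)
                .continueWithTask(task -> fileRef.getDownloadUrl())
                .addOnSuccessListener(url -> photoRef.setValue(url.toString()));
    }
}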
So I had the same problem, and I reached this solution:
I named the files with the date and time plus the user's uid, so it is almost impossible to have two files with the same name, and the names will be different every single time.
// Name = current timestamp + the signed-in user's UID
DateFormat dtForm = new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss.");
String date = dtForm.format(Calendar.getInstance().getTime());
String fileName = date + FirebaseAuth.getInstance().getCurrentUser().getUid();

FirebaseStorage
        .getInstance()
        .getReference("Folder/SubFolder/" + fileName)
        .putFile(yourData); // yourData is the Uri of the local file to upload
With this, the file names are going to look like "2022.09.12.11.50.59.WEFfwds2234SA11", for example.
I need to integrate my application with a Drupal web portal. I need to fetch the data entered by the user for processing in my application. Can anyone suggest a way to do that?
Thanks in advance....
I assume you are using Drupal 7.
To export data from Drupal as CSV / JSON / XML / ...: Views Datasource allows you to set up a view (create a page, assign a URL) which displays whatever you want (content, taxonomy, users...). You can then call its URL, fetch the JSON / CSV / XML file, and process it in your app (see the sketch below). You can also write a small PHP script which calls this URL, writes the response to a file, sends it over FTP, and so on. Views Data Export can help too (it handles large data sets). If you need to fetch nodes individually, Content as JSON can be handy (it provides a URL for each node which returns the node's fields as JSON).
To import data into Drupal: Feeds.
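A minimal sketch of the consuming side, assuming you exposed a Views Datasource page at a hypothetical URL like /api/articles that returns JSON:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Fetch the JSON emitted by the view and hand it to your own processing;
// any JSON library can parse the result.
public class DrupalFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/api/articles"); // your view's URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        System.out.println(json); // process in your app from here
    }
}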
Good luck
So, here's the scenario. I have a site which allows you to perform certain operations on files, which take on the order of seconds. I don't want the client to have to wait that long before the server returns a response, so the way we have it now is:
User performs an operation in their browser (client)
Client sends a POST request to server with parameters
Server adds operation to job queue and sends back the expected url of the result
Client polls the server until the file is available, then fetches it
Currently these files are stored on my EC2 server, but I want to move this to S3. I was wondering if this type of flow is possible.
The server knows what the file will be named and where it will be saved well before it actually exists. Is that the same case with S3? Is there a way of knowing the file's URL if I know all the information beforehand (bucket, filename, etc.)?
All S3 object URLs follow patterns, so it's easy to know what the URL will be ahead of time.
If the bucket name is DNS-compliant (required of all regions except for US Standard), then it'll look like this:
<bucket>.s3.amazonaws.com/<object-path>
The US Standard region is a bit more lax in its bucket-name rules (they aren't required to be DNS-compliant), so some may look like this:
s3.amazonaws.com/<bucket>/<object-path>
So, if your bucket name is something DNS-compliant (e.g., example), and your file is abc/123/file.txt, then your object URL will be:
example.s3.amazonaws.com/abc/123/file.txt
And if your bucket name is NOT DNS-compliant (e.g., EXAMPLE_123), and your file is abc/123/file.txt, then your object URL will be:
s3.amazonaws.com/EXAMPLE_123/abc/123/file.txt
Here's an example of the DNS-compliant logic from the official PHP SDK.
https://github.com/aws/aws-sdk-php/blob/master/src/Aws/S3/S3Client.php#L293-L317
Which package should I use for file uploads to Amazon S3? knox, collectionFS, or any other?
Is CollectionFS ready for file uploads to S3, or is knox good enough?
Note: in any case I don't want to expose my key on the client side, because of the security issue.
Also, is there an option where the client's file stream can be connected directly to a stream that uploads to S3, so that the file is never actually present on the server?
I happen to be looking at this too. CollectionFS allows files to be sent to the server, and you can create a handler that sends them on to S3 (possibly as a blob stream rather than a storage). https://github.com/CollectionFS/Meteor-cfs-s3
In the end, you need a place to specify the access key for uploading. If you do not put it on the client side, and you want the file stream to upload directly to S3, how does the stream get access?
Possibly you could also take a look at this as an alternative: https://www.inkfilepicker.com/