Which package should I use for file uploads to Amazon S3: knox, CollectionFS, or something else?
Is CollectionFS ready for file uploads to S3, or is knox good enough?
Note: in any case I don't want to expose my key on the client side, for security reasons.
2. Also, is there an option where the client's file stream can be connected directly to a stream that uploads to S3, so the file is never actually present on the server at any time?
I happen to be looking at this too. CollectionFS allows files to be sent to the server, and you can create a handler that sends them on to S3 (possibly as a blob stream rather than as storage). https://github.com/CollectionFS/Meteor-cfs-s3
In the end, you need a place to specify the access key for uploading. If you do not put it on the client side, yet you want the file stream to upload directly to S3, how does the stream get access?
Possibly you could also take a look at this as an alternative: https://www.inkfilepicker.com/
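Whatever package you pick, one way to keep the key off the client is to sign the upload on the server and hand the client a short-lived URL; the client then PUTs the file straight to S3, so it never sits on your server. Here is a rough sketch of that idea using the Node aws-sdk inside a Meteor method (the bucket name, method name and expiry are placeholders, not something cfs-s3 gives you out of the box):

// server-only code: the secret key never reaches the client
import { Meteor } from 'meteor/meteor';
import AWS from 'aws-sdk';

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

Meteor.methods({
  // the client calls this, gets back a temporary URL, and PUTs the file to it
  signUpload(key, contentType) {
    return s3.getSignedUrl('putObject', {
      Bucket: 'my-upload-bucket', // placeholder bucket name
      Key: key,
      ContentType: contentType,
      Expires: 600,               // link is valid for 10 minutes
    });
  },
});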
I would like to know the best way to upload a file using DDD and CQRS. I would like to save the image on the file system and save its name in the database.
PS: I know that DDD is not about how many layers a project has.
This is my example:
Customer
(Id, Name, Email, Picture (only one))
I'm not asking for the code that saves the image, but where to call the save-image method.
In the Controller, I have a CustomerViewModel with these fields. After that, I call my Application layer via CustomerAppService, then a Command... and so on.
The method that saves images to a folder is in my Infrastructure layer.
Should I call the save-to-folder method in the Controller? In the Application layer? In the CommandHandler?
Based on my experience, I solved the issue like this:
Create an endpoint (controller action) that generates a temporary link for uploading the file directly to storage (we used AWS S3, which provides the ability to create pre-signed URLs).
The client uploads the file via that URL.
The client sends an acknowledge request with the file's metadata to another endpoint (controller action).
You can save the image in the Controller and retain a reference to the saved file, e.g. a path, an ID of a record in a database, an S3 bucket address, etc. That reference is what you would pass in your command and would be saved on the Customer record.
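To make the shape of that flow concrete, here is a minimal sketch (TypeScript only for brevity; the names CreateCustomerCommand, generateUploadUrl and the key layout are made up for illustration). The point is that only a reference string crosses the Controller -> Application -> CommandHandler boundary, while the actual bytes go straight to storage:

// Infrastructure concern: producing a temporary upload URL (stubbed here).
function generateUploadUrl(key: string): string {
  return `https://example-bucket.s3.amazonaws.com/${key}?X-Amz-Signature=stub`;
}

// Controller action 1: hand the client a place to upload the picture.
function getPictureUploadLink(customerId: string): { uploadUrl: string; pictureRef: string } {
  const pictureRef = `customers/${customerId}/picture.jpg`;
  return { uploadUrl: generateUploadUrl(pictureRef), pictureRef };
}

// The command only carries the reference, never the file itself.
interface CreateCustomerCommand {
  name: string;
  email: string;
  pictureRef: string;
}

// Controller action 2: the client acknowledges the upload; the handler just persists the reference.
function acknowledgeCustomer(name: string, email: string, pictureRef: string): void {
  const command: CreateCustomerCommand = { name, email, pictureRef };
  // customerAppService.handle(command) -> CommandHandler stores the Customer with pictureRef
  console.log('dispatching', command);
}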
I can upload, e.g., a 10 KB JSON file to the Firebase database.
But when I want to upload more, e.g. a 30 KB or 70 KB JSON file, it shows the error "There was a problem contacting the server. Try uploading your file again".
Please first refer to the status dashboard:
https://status.firebase.google.com/
At the time of the question, the console was experiencing a service disruption, which is why you can read/write to the DB via your application but cannot perform admin tasks via the console.
Note that the affected component is listed on the status dashboard separately from, and well above, the Realtime Database (RTDB) entry.
I'm recording calls with the MixMonitor() application and it works fine, but recently I was asked to record each leg of the call in addition to the mixed file. I know that I can record each leg with Monitor() and then use an external script to mix them, but that puts additional load on the server. So I wonder: can I do it by means of Asterisk itself? For example, by using Monitor and MixMonitor together?
You can specify the call-legs with MixMonitor's parameters:
MixMonitor(mixed.wav,r(in.wav)t(out.wav))
as stated in the description:
asterisk*CLI> core show application MixMonitor
r(file): Use the specified file to record the *receive* audio feed.
Like with the basic filename argument, if an absolute path isn't given,
it will create the file in the configured monitoring directory.
t(file): Use the specified file to record the *transmit* audio feed.
Like with the basic filename argument, if an absolute path isn't given,
it will create the file in the configured monitoring directory.
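Putting that together in the dialplan might look like this (the filenames, the extension pattern and the Dial target are only examples; adjust to your setup):

exten => _X.,1,MixMonitor(${UNIQUEID}-mixed.wav,r(${UNIQUEID}-in.wav)t(${UNIQUEID}-out.wav))
 same => n,Dial(SIP/${EXTEN})

This records both legs and the mixed file in a single MixMonitor call, so no external mixing script is needed.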
I would like to implement the following functionality:
downloading all the files from a specified remote directory to a local directory.
after downloading all the files, I need a list file which contains the names of all the downloaded files.
(I only want this list file to be created once all the files have been downloaded successfully.)
Point 1:
Let's say we have around 10 files in the remote directory.
I can use an int-sftp:inbound-channel-adapter component to download all the files, but 10 poll cycles are needed to download all of them, since the inbound component only downloads one file per poll request.
Spring Integration creates 10 File messages one by one.
Questions:
How can I identify the last file (message) received from the FTP server?
I don't want to let users access the list file until all the files from the FTP server have been successfully received.
How can I achieve this?
I can write the file names into a list file using the int-file:outbound-channel-adapter, but users could read temporary information from that file before the download process is finished.
How can I trigger an event when all the files on the FTP server have been downloaded?
Thanks for your advice,
Ferenc
First of all, this isn't correct:
the inbound component is only able to download 1 file per poll request
You can configure it to download an unlimited number of files during a single poll with max-messages-per-poll="-1". Anyway, that is the default option on <poller>.
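For example (the directories, the channel and the sftpSessionFactory bean below are just placeholders), a single poll then fetches everything the remote directory currently holds:

<int-sftp:inbound-channel-adapter id="sftpInbound"
        channel="filesIn"
        session-factory="sftpSessionFactory"
        remote-directory="/remote/dir"
        local-directory="/local/dir"
        auto-create-local-directory="true">
    <int:poller fixed-rate="5000" max-messages-per-poll="-1"/>
</int-sftp:inbound-channel-adapter>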
Anyway, if your case really is to download one file per poll, you can go ahead with those requirements.
Since any messaging system tries to follow a stateless paradigm, it is normal that one message doesn't know anything about another, and so they don't affect each other. The async scenario is the best fit for messaging; with it we may even process the second message more quickly than the first.
Your requirement is interesting enough, and I won't dare to call it strange, because any business case may have its place.
Since you are going to process several downloaded files as one group, you will need some marker on the remote server. It could be a time frame that can be extracted from the file timestamps, or a marker file stored on the remote server indicating that a set of files is complete, so that you can process their local versions from your application. It would be great if that marker file contained the list of file names in that group.
Otherwise we don't have any hook to group messages for those files.
On the other hand, you can consider using <int-sftp:outbound-gateway> with the MGET command: http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/sftp.html#sftp-outbound-gateway
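The gateway variant may suit your list-file requirement better, because a single MGET reply message carries the whole group: its payload is the List<File> of everything that was downloaded, so that one message is your "all files are here" event and can be routed on to write the list file. A sketch (channel names and directories are again placeholders):

<int-sftp:outbound-gateway id="sftpMGet"
        session-factory="sftpSessionFactory"
        request-channel="mgetRequestChannel"
        reply-channel="downloadedFilesChannel"
        command="mget"
        expression="payload"
        local-directory="/local/dir"/>

Sending a message whose payload is the remote path pattern, e.g. /remote/dir/*, to mgetRequestChannel triggers the download.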
So, here's the scenario. I have a site which allows you to perform certain operations on files, and those operations take on the order of seconds. I don't want the client to have to wait that long before the server returns a response, so the way we have it now is this:
User performs an operation in their browser (client)
Client sends a POST request to server with parameters
Server adds the operation to a job queue and sends back the expected URL of the result
Client polls the server until the file is available, then fetches and serves it
Currently these files are being stored on my EC2 server, but I want to move this to S3. I was wondering if this type of flow is possible.
The server knows what the file will be saved as, and where, well before it is actually written; is that also the case with S3? Is there a way of knowing the file's URL if I know all the information beforehand (bucket, filename, etc.)?
All S3 object URLs follow patterns, so it's easy to know what the URL will be ahead of time.
If the bucket name is DNS-compliant (required of all regions except for US Standard), then it'll look like this:
<bucket>.s3.amazonaws.com/<object-path>
The U.S. Standard region is a bit more lax in its bucket-name rules (they aren't required to be DNS-compliant), so some may look like this:
s3.amazonaws.com/<bucket>/<object-path>
So, if your bucket name is something DNS-compliant (e.g., example), and your file is abc/123/file.txt, then your object URL will be:
example.s3.amazonaws.com/abc/123/file.txt
So, if your bucket name is NOT DNS-compliant (e.g., EXAMPLE_123), and your file is abc/123/file.txt, then your object URL will be:
s3.amazonaws.com/EXAMPLE_123/abc/123/file.txt
Here's an example of the DNS-compliant logic from the official PHP SDK.
https://github.com/aws/aws-sdk-php/blob/master/src/Aws/S3/S3Client.php#L293-L317
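So you can build the URL in code before the object even exists. Here is a small sketch of the same logic (the DNS-compliance test below is simplified and does not cover every bucket-naming rule):

function isDnsCompliant(bucket: string): boolean {
  // simplified: lowercase letters, digits, dots and hyphens, 3-63 chars, no ".."
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(bucket) && !bucket.includes('..');
}

function objectUrl(bucket: string, key: string): string {
  return isDnsCompliant(bucket)
    ? `https://${bucket}.s3.amazonaws.com/${key}`   // virtual-hosted style
    : `https://s3.amazonaws.com/${bucket}/${key}`;  // path style (non-compliant names)
}

// objectUrl('example', 'abc/123/file.txt')     -> https://example.s3.amazonaws.com/abc/123/file.txt
// objectUrl('EXAMPLE_123', 'abc/123/file.txt') -> https://s3.amazonaws.com/EXAMPLE_123/abc/123/file.txt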