I've been tinkering with ASP.NET and Blazor to upload files to a server.
<InputFile
OnChange="OnInputFileChange"
multiple
accept="#GetAcceptedFileTypes()"
/>
The multiple attribute allows me to select more than one file. I figured out that you can limit the number of files the app will accept by passing a maximum file count to InputFileChangeEventArgs.GetMultipleFiles() in the @code block; exceeding that limit will throw an exception.
It's cool that you can set that hard limit, but is there a way to prevent the user from accidentally selecting more files than they're supposed to? For example, the accept attribute makes the upload window show only the file types that you specify (unless the user switches it to All Files (*.*)), so they're not trying to upload invalid file types. Is there similar functionality for the count?
I don't know what all of the available properties for InputFile are. I was hoping there'd be a list of them, like that documentation I linked. For C#, it's easy to find classes, methods, and/or parameters available. My Googling has come up short.
Hi and thanks in advance. I want to delete a folder from Google Cloud Storage, including all the versions of all the objects inside. That's easy when you use gsutil from your laptop (you can just use the folder name as a prefix and pass the flag to delete all versions/generations of each object)...
...but I want it in a script that is triggered periodically (for example when I'm on holiday). My current ideas are Apps Script and Google Cloud Functions (or Firebase Functions). The problem is that in these cases I don't have an interface as powerful as gsutil; I have to use the REST API, so I cannot say something like "delete everything with this prefix", nor "all the versions of this object". Thus the best I can do is:
a) List all the objects given a prefix. So for prefix "myFolder" I receive:
myFolder/obj1 - generation 10
myFolder/obj1 - generation 15
myFolder/obj2 - generation 12
... and so on for hundreds of files and at least 1 generation/version per file.
b) For each file-generation, delete it by giving the complete object name plus its generation.
As you can see that seems a lot of work. Do you know a better alternative?
Listing the objects you want to delete and deleting them is the only way to achieve what you want.
The only alternative is Object Lifecycle Management, which can delete objects for you automatically based on conditions, if those conditions cover your requirements.
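In a scheduled script, the list-and-delete loop above maps onto two JSON API endpoints: objects.list with a prefix and versions=true, then one objects.delete per (name, generation) pair. A Python sketch of just the URL construction (bucket and object names are taken from the question or made up; nothing is actually sent here):

```python
from urllib.parse import quote, urlencode

API = "https://storage.googleapis.com/storage/v1"

def list_url(bucket, prefix):
    # versions=true includes noncurrent (archived) generations in the listing
    return f"{API}/b/{bucket}/o?" + urlencode({"prefix": prefix, "versions": "true"})

def delete_url(bucket, name, generation):
    # quote(..., safe="") percent-encodes the "/" in object names like "myFolder/obj1"
    return f"{API}/b/{bucket}/o/{quote(name, safe='')}?generation={generation}"

# e.g. for the listing shown in the question:
to_delete = [("myFolder/obj1", 10), ("myFolder/obj1", 15), ("myFolder/obj2", 12)]
urls = [delete_url("my-bucket", n, g) for n, g in to_delete]
```

The versions=true parameter is what makes noncurrent generations show up in the listing, and the generation query parameter on the delete targets one specific version, so the loop really is just list, then delete each pair.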
I have Drupal 7.x and Advanced CSS/JS Aggregation 7.x-2.7
In the advagg_js and advagg_css folders (path is sites/default/files) I have too many identical files, and I don't understand why...
This is the name of a file in advagg_css:
css____tQ6DKNpjnnLOLLOo1chze6a0EuAzr40c2JW8LEnlk__CmbidT93019ZJXjBPnKuAOSV78GHKPC3vgAjyUWRvNg__U78DXVtmNgrsprQhJ0bcjElTm2p5INlkJg6oQm4a72o
How can I delete all these files without doing damage?
Maybe in performance/advagg/operations, in the Cron Maintenance Tasks box, I must check "Clear All Stale Files" ("Remove all stale files. Scan all files in the advagg_css/js directories and remove the ones that have not been accessed in the last 30 days.")?
I hope you can help me...
Thanks a lot
I can guarantee that there are very few duplicate files in those directories. If you really want, you can manually delete every file in there; a lot of them will be generated again, so you're back to having a lot of files (the css/js files get auto-created on demand, just like image styles). AdvAgg is very good at preventing a 404 from happening when an aggregated css/js file is requested. You can adjust how old a file needs to be in order for it to be considered "stale": inside the core drupal_delete_file_if_stale() function is the drupal_stale_file_threshold variable. Setting this in your settings.php file to something like 2 days, $conf['drupal_stale_file_threshold'] = 172800;, will make Drupal more aggressive about removing aggregated css and js files.
Long term, if you want to reduce the number of different css/js files being created, you'll need to reduce the number of combinations/variations that are possible with your css and js assets. On the "admin/config/development/performance/advagg/bundler" page, under raw grouping info, it will tell you how many different groupings are currently possible. Take that number and multiply it by the number of bundles (usually 2-6 if following a guide like https://www.drupal.org/node/2493801, or 6-12 if using the default settings), and that's the number of files that can currently be generated. Multiply it by 2 for gzip. On one of our sites that gives us over 4k files.
In terms of file names: the first base64 group represents the file names, the second the file contents, and the third the AdvAgg settings. This allows the aggregate's contents to be recreated from just the filename, as all this additional info is stored in the database.
I have 3 Global resource files:
WebResources.resx
WebResources.es.resx
WebResources.it.resx
When making changes to my application, I always add the default global resource records (English) to the WebResources.resx file. However, I don't always have the Spanish and Italian versions at the time, so these need to be added at a later stage.
My project manager has suggested that whenever adding a record into the WebResources.resx file, then a blank record should be added into the .es and .it versions. Then when it comes to finding out which records need a translation, we can order by the value and see a list of the blanks.
I like the fall-back behaviour of Global Resources: if there is no record in the specified resource file, the default record is returned. Adding a blank record prevents this fallback.
Is there a better way of finding out what records are missing from the .es and .it resource files?
There are some tools around that should help you do what you need.
Here is one you could try for example
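If a dedicated tool is overkill: .resx files are plain XML, so a short script can diff the keys directly. A Python sketch (helper names are made up; the sample data is trimmed to two entries standing in for the real files) that lists entries present in the default file but missing from a translation:

```python
import xml.etree.ElementTree as ET

def resx_keys(xml_text):
    # Each string in a .resx file lives in a <data name="..."> element
    root = ET.fromstring(xml_text)
    return {d.get("name") for d in root.findall("data")}

def missing_keys(default_resx, translated_resx):
    # Keys defined in the default file but absent from the translation
    return resx_keys(default_resx) - resx_keys(translated_resx)

# Tiny inline samples standing in for WebResources.resx and a Spanish sibling:
default = """<root>
  <data name="Greeting"><value>Hello</value></data>
  <data name="Farewell"><value>Goodbye</value></data>
</root>"""
spanish = """<root>
  <data name="Greeting"><value>Hola</value></data>
</root>"""

print(missing_keys(default, spanish))  # → {'Farewell'}
```

Run against the real default file and its Spanish and Italian siblings, this produces the to-translate list without inserting blank records, so the resource fallback keeps working.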
I am using riak (and riak search) to store and index text files. For every file I create a riak object (the text content of the file is the object value) and save it to a riak bucket. That bucket is configured to use the default search analyzer.
I would like to store (and be able to search by) some metadata for these files. Like date of submission, size etc.
So I have asked on IRC, and also given it quite some thought.
Here are some solutions, though they are not as good as I would like:
I could have a second "metadata" object that stores the data in question (maybe in another bucket) and have it indexed, etc. But that is not a very good solution, especially if I want to be able to do combined searches like value:someword AND date:somedate.
I could put the contents of the file inside a JSON object like {"date": somedate, "value": "some big blob of text"}. This could work, but it's going to put too much load on the search indexer, as it will have to first deserialize a big JSON object (and those files are sometimes quite big).
I could write a custom analyzer/indexer that reads my file object and generates/indexes the metadata in question. The only real problem here is that I have a hard time finding documentation on how to do that. And it is probably going to be a bit of an operational PITA, as I will need to push some Erlang code to every Riak node (and remember to do that when I update the cluster, when I add new nodes, etc.). I might be wrong on this; if so, please correct me.
So the best solution for me would be if I could alter the Riak Search index document and add some arbitrary search fields to it after it gets generated. Is this possible, is it wise, and is there support for this in libraries, etc.? I can certainly modify the document in question "manually", since a bucket with index documents gets automatically created, but as I said, I just don't know what's the right thing to do.
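For what it's worth, the combined-JSON idea from option 2 is easy to prototype. A minimal Python sketch (field names are illustrative, not any Riak API) shows the document shape, and why any consumer has to parse the entire blob, text included, before reaching a single metadata field:

```python
import json

# Illustrative combined document: metadata fields alongside the full text.
doc = {
    "date": "2012-06-01",              # hypothetical metadata
    "size_bytes": 84213,               # hypothetical metadata
    "value": "some big blob of text",  # the whole file contents go here
}
stored = json.dumps(doc)

# Reading any one field requires deserializing the whole blob first:
meta = json.loads(stored)
print(meta["date"])  # → 2012-06-01
```

That parse-everything cost is exactly the indexer load concern raised above; it grows with the size of the text, not with the size of the metadata.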
Is storing a list of 1000 instances of my custom class in a session variable a good approach? My ASP.NET web app needs multilingual support, and I am storing the labels in a table. I will have multiple users, each with their own language preference and text (label content) preference.
I am thinking about loading the labels, storing them in a session variable, and accessing that in the pages. Storing them in an Application variable does not work because each customer has their own text for the labels. So is session a good way of doing this? I guess I will have almost 1000 labels as of now, and it may increase as the application grows.
My custom class has 2 properties: LanguageCode and LanguageName.
For some reason I can't use ASP.NET resource files :(
What are your thoughts on this?
You should store a single set of labels for each language, then store the language name in session and use it to look up the correct label set.
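A quick language-agnostic sketch of that split, in Python for brevity (all names are hypothetical; in ASP.NET the shared store would be an Application variable or cache keyed by language, while the session keeps only the preference):

```python
# Shared, application-wide store: one label set per language, loaded once.
LABELS = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "es": {"greeting": "Hola", "farewell": "Adiós"},
}

# Per-user session holds only the small preference, not the 1000-label list.
session = {"language": "es"}

def label(session, key, default_lang="en"):
    lang = session.get("language", default_lang)
    # Fall back to the default language when a translation is missing
    return LABELS.get(lang, {}).get(key) or LABELS[default_lang][key]

print(label(session, "greeting"))  # → Hola
```

This keeps per-session memory proportional to one string per user, while the label sets are shared by every user of the same language.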
Some thoughts to notice:
If your managers have something in particular against .resx files, you can store all those labels in any other format (e.g. plain text files), or in a DB.
If you have a small number of users and loading time is extremely crucial, your managers may have a point. Other than that, they're wrong, and I would consider trying to explain that to them.
Try considering "computing" the labels at runtime (e.g. if some of them include adding prefixes, suffixes, etc., you can save only the "stems" and provide the relevant label only on demand; that will save you some server space).
If I were you, I'd first spend some time trying to figure out why your resource files are not working... as long as you are setting the Thread.CurrentThread.CurrentUICulture value (that's the one resource lookup uses, not CurrentCulture) to the specific culture, and have the resource files in the correct place, I can't think of any reason why it wouldn't work.
Not a good idea, there are two problems with it:
It will use up the memory on the server, and reduce the number of users that can use the system
The time for each request will increase
The standard way to do this is using RESX (resource) files. You should try to fix that.
Why not use Application variables and create one per language? Each Application variable would be a superset of all the labels for that language.
Or maybe store the entire table(s) in Application variable(s), instead of storing multiple session variables that, I assume, intersect with each other.