I have 3 Global resource files:
WebResources.resx
WebResources.resx.es
WebResources.resx.it
When making changes to my application, I always add the default global resource records (English) to the WebResources.resx file. However, I don't always have the Spanish and Italian versions at the time, so these need to be added at a later stage.
My project manager has suggested that whenever a record is added to the WebResources.resx file, a blank record should be added to the .es and .it versions. Then, when it comes to finding out which records need a translation, we can order by value and see a list of the blanks.
I like the fallback behaviour of global resources: if there is no record in the specified resource file, the default record is returned. Adding a blank record prevents this fallback.
Is there a better way of finding out what records are missing from the .es and .it resource files?
There are some tools around that should help you do what you need.
Here is one you could try, for example.
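If a dedicated tool is not an option, the comparison is also easy to script yourself without touching the .es/.it files, so the fallback stays intact. Here is a minimal, untested C# sketch using ResXResourceReader; the file names are taken from your question and may need adjusting:

// Requires a reference to System.Windows.Forms (for ResXResourceReader).
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Resources;

class MissingResourceFinder
{
    static HashSet<string> ReadKeys(string path)
    {
        var keys = new HashSet<string>();
        using (var reader = new ResXResourceReader(path))
        {
            foreach (DictionaryEntry entry in reader)
                keys.Add((string)entry.Key);
        }
        return keys;
    }

    static void Main()
    {
        // Keys present in the default file but missing from each localised file.
        var defaultKeys = ReadKeys("WebResources.resx");
        foreach (var localized in new[] { "WebResources.resx.es", "WebResources.resx.it" })
        {
            Console.WriteLine("Missing from " + localized + ":");
            foreach (var key in defaultKeys.Except(ReadKeys(localized)))
                Console.WriteLine("  " + key);
        }
    }
}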
I've been tinkering with ASP.NET and Blazor to upload files to a server.
<InputFile
OnChange="OnInputFileChange"
multiple
accept="#GetAcceptedFileTypes()"
/>
The multiple attribute allows me to select more than one file. I figured out that you can limit the number of files the app will accept by setting the argument in InputFileChangeEventArgs.GetMultipleFiles() in the @code block; exceeding that limit will throw an exception.
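Roughly, the handler looks like this (a simplified sketch; the limit of 3, the accepted file types, and the buffering are just placeholders):

@code {
    // Simplified sketch; the limit and file types are placeholders.
    // Assumes System.IO and Microsoft.AspNetCore.Components.Forms are in scope.
    private const int MaxAllowedFiles = 3;

    private string GetAcceptedFileTypes() => ".png,.jpg";

    private async Task OnInputFileChange(InputFileChangeEventArgs e)
    {
        try
        {
            // Throws InvalidOperationException if more than MaxAllowedFiles were selected.
            var files = e.GetMultipleFiles(MaxAllowedFiles);
            foreach (var file in files)
            {
                // OpenReadStream() defaults to a 500 KB per-file limit.
                await using var stream = file.OpenReadStream();
                using var buffer = new MemoryStream();
                await stream.CopyToAsync(buffer);
                // ... send the buffered bytes to the server ...
            }
        }
        catch (InvalidOperationException)
        {
            // The user selected more files than allowed.
        }
    }
}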
It's cool that you can set that hard limit, but is there a way to prevent the user from accidentally selecting more files than they're supposed to? For example, the accept property makes it so that the upload window only shows the file types that you specify (unless they switch it to All Files (*.*)), so they're not trying to upload invalid file types. Is there similar functionality for the count?
I don't know what all of the available properties for InputFile are. I was hoping there'd be a list of them, like that documentation I linked. For C#, it's easy to find classes, methods, and/or parameters available. My Googling has come up short.
From a Linux bash script, I want to read the structured data stored by a particular Firefox add-on called FB-Purity.
I have found a folder called .mozilla/firefox/b8eab5j0.default/storage/default/moz-extension+++37a9788c-671d-4cae-ba5c-fbdb8788499a^userContextId=4294967295/ that contains a .metadata file containing the string moz-extension://37a9788c-671d-4cae-ba5c-fbdb8788499a, a URL which, when opened in Firefox, shows the add-on's details. So I am pretty sure that this folder belongs to the add-on.
That folder contains an idb directory, which sounds like the Indexed Database API, a W3C standard apparently used by Firefox since last year to store add-on data.
The idb folder only contains an empty folder and an SQLite file.
The SQLite file, unfortunately, does not contain much application structured data, but the object_data table contains a 95KB blob which probably contains the real structured data:
INSERT INTO `object_data` VALUES (1,'0pmegsjfoetupsf.742612367',NULL,NULL,
X'e08b0d0403000101c0f1ffe5a201000400ffff7b00220032003100380035003000320022003a002
2005300610074006f0072007500200055007205105861006e00690022002c00220036003100350036
[... 95KB ...]
00780022007d00000000000000');
Question: Any clue what this blob's format is? How can I extract it (using the command line, or any library or Linux tool) to JSON or any other readable format?
Well, I had a fun day today figuring this out and ended up creating a Python tool that can read the data from these indexedDB database files and print them (and maybe more at some point): moz-idb-edit
To answer the technical parts of the question first:
Both the key (name) and the data (value) use a Mozilla proprietary format whose only documentation, at this time, appears to be its source code.
The keys use a special just-for-this use-case encoding whose rough description is available in mozilla-central/dom/indexedDB/Key.cpp – the file also contains the only known implementation. Its unique selling point appears to be the fact that it is relatively compact while being compatible with all the possible index types websites may throw at you as well as being in the correct binary sorting order by default.
The values are stored using SpiderMonkey's internal StructuredClone representation, which is also used when moving values between processes in the browser. Again, there are no docs to speak of, but one can read the source code, which fortunately is quite easy to understand. Before being added to the database, however, the generated binary is compressed on the fly using Google's Snappy compression, which “does not aim for maximum compression [but instead …] aims for very high speeds and reasonable compression” – probably not a bad idea considering that we're dealing with wasteful web content here.
To locate the correct indexedDB file for an extension's local storage data, one needs to resolve the extension's static ID to a so-called “internal UUID”, whose value is different in every browser profile instance (to make tracking based on installed add-ons a lot harder). The mapping table for this is stored as a pref (“extensions.webextensions.uuids”) in prefs.js. The IDB path then is ${MOZ_PROFILE}/storage/default/moz-extension+++${EXT_UUID}^userContextId=4294967295/idb/3647222921wleabcEoxlt-eengsairo.sqlite
For all practical intents and purposes you can read the value of a single storage key of any extension by downloading the project mentioned above. Basic usage is:
$ ./moz-idb-edit --extension "${EXT_ID}" --profile "${MOZ_PROFILE}" "${STORAGE_KEY}"
Where ${EXT_ID} is the extension's static ID (check its manifest.json file or look in about:support#extensions-tbody if you're unsure), ${MOZ_PROFILE} is the Firefox profile directory (also in about:support), and ${STORAGE_KEY} is the name of the key you'd like to query (unfortunately, querying all keys is not supported yet).
Writing data is not currently supported either.
I'll update this answer as I implement more features (or drop me an issue on the project page!).
We are running CRM 2016 SP1 on-premise. We have DEV, QA, Staging, and Production environments. The solution in our DEV, QA and Staging is unmanaged but in Production is managed.
We have a requirement to change some fields' data types from single line text to multiple line text in our production environment.
I have been researching this and have found the following links:
https://debajmecrm.com/2014/04/12/change-field-data-type-in-mscrm-without-dropping-and-recreating-the-field/
https://community.dynamics.com/crm/b/workandstudybook/archive/2014/07/28/converting-single-line-text-to-multi-line-text-using-crm-sdk-s-configuration-migration-tool
Convert Single line text to Multiline text (MS CRM 2016)
From what I understand from these pages, my options are as follows:
Change the field types directly in the database
Use tools such as Configuration Manager or Attribute Manager (XrmToolbox) to export the data, delete the field, create a new field, and import the data back into the new field.
Option 1 requires making changes directly to the database, which is something we would rather not do as it will cause problems with Microsoft licensing.
Option 2 requires deleting the old field and creating a new field with the same name in EVERY environment. This means the new field will have the same name but different GUID values in every environment.
Am I right in assuming that option 2 will result in errors in the future when we want to deploy a solution from one environment to another because the GUIDs for the new field are different?
Also, Option 2 requires the solution to be unmanaged in all environments. However, in our case, it is managed in Production.
With all these in mind, what are my choices? What is the best way of achieving this?
Your comments are greatly appreciated.
Kind regards
Easiest way to do this:
Hide the original field from forms, views, reports, etc.
Optional - Set the original field to be non-searchable (so it doesn't appear in advanced find).
Optional - Rename the original field so it's clear it shouldn't be used; some people like to prefix it with a 'z' so it appears at the bottom of lists.
Create your new field, put it in all the same places as the original.
Migrate the data from the original field to the new field. A workflow executed in bulk could do it, or perhaps an export, edit, and import (see the sketch after this list).
Optional - Delete the original field.
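For the data-migration step, a small job against the CRM SDK can copy the values across once the new field exists. A rough, untested sketch, assuming an existing IOrganizationService called service; the entity and field names are placeholders, and you would need to add paging if you have more than 5,000 records:

// Assumes: using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query;
// "account", "new_oldfield" and "new_newfield" are placeholder names.
var query = new QueryExpression("account")
{
    ColumnSet = new ColumnSet("new_oldfield")
};

foreach (var record in service.RetrieveMultiple(query).Entities)
{
    if (!record.Contains("new_oldfield"))
        continue;

    // Copy the single-line value into the new multi-line field.
    var update = new Entity(record.LogicalName) { Id = record.Id };
    update["new_newfield"] = record.GetAttributeValue<string>("new_oldfield");
    service.Update(update);
}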
In terms of your options above: 1 is unsupported (unlikely to mess up your licensing, but it has a good chance of ruining CRM with no sensible way to recover); 2 looks similar to my suggestion above.
I have Drupal 7.x and Advanced CSS/JS Aggregation 7.x-2.7
In the folders advagg_js and advagg_css (path is sites/default/files) I have too many identical files, and I don't understand why...
This is the name of a file in advagg_css:
css____tQ6DKNpjnnLOLLOo1chze6a0EuAzr40c2JW8LEnlk__CmbidT93019ZJXjBPnKuAOSV78GHKPC3vgAjyUWRvNg__U78DXVtmNgrsprQhJ0bcjElTm2p5INlkJg6oQm4a72o
How can I delete all these files without doing damage?
Maybe in performance/advagg/operations, in the Cron Maintenance Tasks box, I must check "Clear All Stale Files" ("Remove all stale files. Scan all files in the advagg_css/js directories and remove the ones that have not been accessed in the last 30 days.")?
I hope you can help me...
Thanks a lot
I can guarantee that there are very few duplicate files in those directories. If you really want, you can manually delete every file in there; a lot of them will be generated again, so you're back to having a lot of files (the CSS/JS files get auto-created on demand, just like image styles). AdvAgg is very good at preventing a 404 from happening when an aggregated CSS/JS file is requested. You can adjust how old a file needs to be in order for it to be considered "stale": inside the core drupal_delete_file_if_stale() function is the drupal_stale_file_threshold variable. Changing this in your settings.php file to something like 2 days ($conf['drupal_stale_file_threshold'] = 172800;) will make Drupal more aggressive about removing aggregated CSS and JS files.
Long term, if you want to reduce the number of different CSS/JS files being created, you'll need to reduce the number of combinations/variations that are possible with your CSS and JS assets. The "admin/config/development/performance/advagg/bundler" page, under raw grouping info, will tell you how many different groupings are currently possible. Take that number and multiply it by the number of bundles (usually 2-6 if following a guide like this https://www.drupal.org/node/2493801, or 6-12 if using the default settings) and that's the number of files that can currently be generated; multiply it by 2 for gzip. On one of our sites that gives us over 4k files.
In terms of file names, the first base64 group is the file name, the second base64 group is the file contents, and the third base64 group is the advagg settings. This allows the aggregate's contents to be recreated just by knowing the filename, as all this additional info is stored in the database.
How efficient is reading the names of files in a directory in ASP.NET?
Background: I want to update pictures on a webserver automatically and deploy them in advance. E.g. until 1 April I want to pick 'image1.png'; after 1 April, 'image2.png'. To achieve this I have to map every image name to a date which indicates whether that image should be picked or not.
In order to avoid mapping between file name and date in a separate file or database, the idea is to put a date in the file name. Iterating the directory and parsing the dates lets me find my file.
E.g.:
image_2013-01-01.png
image_2013-04-30.png
The second one will be picked from May to eternity if no image with a later date is dropped in.
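The selection logic I have in mind is roughly this (an untested sketch; the "~/images" folder and the naming pattern are assumptions, and it needs System, System.Globalization, System.IO and System.Linq):

// Untested sketch of the selection logic; assumes it runs inside a page or
// controller where Server.MapPath is available.
private string GetCurrentImage()
{
    var candidates = Directory.GetFiles(Server.MapPath("~/images"), "image_*.png")
        .Select(file => new
        {
            File = file,
            // File names look like image_2013-01-01.png.
            Date = DateTime.ParseExact(
                Path.GetFileNameWithoutExtension(file).Substring("image_".Length),
                "yyyy-MM-dd",
                CultureInfo.InvariantCulture)
        });

    // Pick the image with the latest date that has already been reached.
    return candidates
        .Where(x => x.Date <= DateTime.Today)
        .OrderByDescending(x => x.Date)
        .Select(x => x.File)
        .FirstOrDefault();
}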
So I wonder how this solution impacts the speed of a website assuming <20 files.
If you are using something like Directory.GetFiles, that is one call to the OS.
This will access the disk to get the listing.
For fewer than 20 files this will be very quick. However, since this data is unlikely to change very often, consider caching the name of your image.
You could store it in the application context to share it among all users of your site.
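For example, here is a rough, untested sketch of caching the chosen file name in application state; PickImageFromDisk(), the key names, and the one-hour refresh interval are all placeholders:

// Rough sketch: cache the result in application state and rescan at most once per hour.
// Assumes System.Web; key names and the refresh interval are placeholders.
private string GetCurrentImageCached()
{
    var app = HttpContext.Current.Application;
    var cached = app["CurrentImage"] as string;
    var cachedAt = app["CurrentImageTime"] as DateTime?;

    if (cached == null || cachedAt == null ||
        DateTime.UtcNow - cachedAt.Value > TimeSpan.FromHours(1))
    {
        cached = PickImageFromDisk(); // placeholder for the Directory.GetFiles scan + date parsing
        app.Lock();
        app["CurrentImage"] = cached;
        app["CurrentImageTime"] = DateTime.UtcNow;
        app.UnLock();
    }

    return cached;
}

That way the directory is scanned at most once per refresh interval instead of on every request.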