I am using aurelia validation V4 in my SPA.
I am trying to re-configure the aurelia validation rules depending on an option selected in a dropdown.
I could not accomplish this because, each time, the newly added rules are appended to the existing set of rules.
What I am trying to do is remove the existing rules for a property and assign new ones.
How do I re-configure Aurelia validation rules?
The returned ValidationGroup has a destroy() function on it:
this.validate = this.validation.on(this)
.ensure('blah')
.isNotEmpty();
this.validate.destroy();
That should detach the observers from the values, meaning that any old rules will no longer be enforced. I couldn't see whether this actually frees the memory associated with the ValidationGroup (though it might), so you might want to watch out for that.
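For example, here is a minimal sketch of re-configuring on a dropdown change, assuming the same fluent API as above (the property names, the optionChanged callback, and isEmail() are illustrative):

optionChanged(newOption) {
  if (this.validate) {
    this.validate.destroy(); // stop enforcing the old rules
  }
  // build a fresh ValidationGroup carrying only the rules for the new option
  this.validate = this.validation.on(this);
  if (newOption === 'company') {
    this.validate.ensure('taxId').isNotEmpty();
  } else {
    this.validate.ensure('email').isNotEmpty().isEmail();
  }
}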
I have a PASOE Business Entity class set up as a Web Service. I'm trying to determine how to create a custom header that will allow me to pass in a hashed token. Is this something that I need to upgrade to 11.7.4 for the DOH (OpenEdge.Web.DataObject.DataObjectHandler)? Or is this something that I simply add into a method that's defined in the class? Apologies for the lack of code to illustrate my situation, but I'm not sure where to begin.
If you're using a Business Entity with the web transport then you're using the DOH, and the below applies. If you're using the rest transport then you are not using the DOH, and are more limited in your choices.
There is doc available on the DOH at https://documentation.progress.com/output/oe117sp/index.html#page/gssp4/openedge-data-object-handler.html - it's for 11.7.4 but largely applies to all versions (that is, from 11.6.3+). This describes the JSON mapping file, which you'll need to create an override to the default, generated one.
If you want to use the header's value for all operations, then you may want to use one of the DOH's events. There's an example of event handlers at https://github.com/PeterJudge-PSC/http_samples/blob/master/web_handler/data_object_handler/DOHEventHandler.cls ; you will need to start that handler in a session startup procedure using new DOHEventHandler() (the way that code is written, it makes itself a singleton).
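A minimal sketch of such a startup procedure (the file and variable names are illustrative):

/* startup.p - session startup procedure */
/* The sample handler registers itself as a singleton, so the
   reference does not need to be kept around afterwards. */
DEFINE VARIABLE oHandler AS CLASS DOHEventHandler NO-UNDO.
oHandler = NEW DOHEventHandler().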
You can now add handling code for the Invoking event, which fires before the business logic is run.
If you want to pass the header value into the business logic you will need to:
1. Copy the generated mapping file <service>.gen to <service>.map in the same folder ("gen" files are generated and will be overwritten by the tooling).
2. In the .map file, add a new arg entry. This must be in the same order as the parameters of the BE's method.
The JSON should look something like the below; this will read the value of the header and pass it as an input parameter into the method.
{ "ablName": "<parameter_name>",
"ablType": "CHARACTER",
"ioMode": "INPUT",
"msgElem": {"type": "HEADER", "name": "<http-header-name>"}
}
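On the ABL side, the mapped method would then take the header value as an extra input parameter, something like this hedged sketch (the method, parameter, and dataset names are illustrative):

METHOD PUBLIC VOID ReadData(
    INPUT  pcToken AS CHARACTER,   /* filled from <http-header-name> */
    OUTPUT DATASET dsData):        /* the BE's existing dataset */
    /* validate pcToken here, then perform the read */
END METHOD.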
I have an app using React + Redux and coupled with Firebase for the backend.
Oftentimes I will want to add some new attributes to existing objects.
When doing so, existing objects won't get the attribute until they're modified by the new version of the app that handles those new attributes.
For example, let's say I have a /categories/ node; in there I've got objects such as this:
{
  name: "Medical"
}
Now let's say I want to add an icon field with some default value.
Is it possible to update all categories at once so that field always exists with the default value?
Or do you handle this in the client code?
Right now I'm always testing the values to see if they're there or not, but it doesn't seem like a very good way to go about it. I'd like to have one place to define defaults.
It seems like having classes for each object type would be interesting but I'm not sure how to go about this in Redux.
Do you just use the reducer to turn all categories into class instances when you fetch them for example? I'm worried this would be heavy performance wise.
Any write operation to the Firebase Database requires that you know the exact path to the node that you're writing.
There is no built-in operation to bulk update nodes with a path that is only partially known.
You can either keep your client-side code robust enough to handle the missing properties, or you can indeed run a migration script to add the new property to each relevant node. But since that script will have to know the exact path of each node to write, it will likely first have to read/query the database to determine those paths. Depending on the number of items to update, it could possibly use multi-location updates after that to update multiple nodes in one call. E.g.
firebase.database().ref("categories").update({
  "idOfMedicalCategory/icon": "newIconForMedical",
  "idOfCommercialCategory/icon": "newIconForCommercial",
  "idOfTechCategory/icon": "newIconForTech"
});
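A hedged sketch of such a one-off migration script (the "defaultIcon" value is illustrative): it reads all categories once, then fans out a single multi-location update for the nodes that are missing the property.

var ref = firebase.database().ref("categories");
ref.once("value").then(function(snapshot) {
  var updates = {};
  snapshot.forEach(function(child) {
    if (!child.hasChild("icon")) {
      // paths are relative to the "categories" ref
      updates[child.key + "/icon"] = "defaultIcon";
    }
  });
  // one multi-location write for all missing icons
  return Object.keys(updates).length ? ref.update(updates) : null;
});

For the client-side option, one way to keep a single place for defaults is a small helper applied where the fetched data enters the store, e.g. in the reducer (withDefaults is an illustrative name):

var withDefaults = function(category) {
  return Object.assign({ icon: "defaultIcon" }, category);
};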
Is there an auto-delete functionality available to delete row(s) from a specific table when a condition is met?
Note: I'm using Symfony 3.3
If such a method doesn't exist, is there a DQL alternative, even as a static method? (It doesn't have to be automatic; what I mean is that a button click would trigger the action in the controller.)
Example:
I have an entity named Deal and I want to auto-delete any instance of Deal in my database that has passed its delay, which is expressed as a variable number of days.
There is not, but if, as part of your domain logic, you need to perform automatic deletes, maybe you should look into EventListeners or Subscribers with the Symfony EventDispatcher component, so that actions may be performed when an event is fired in your system. You can easily inject the entity manager into one of those and place your logic there.
https://symfony.com/doc/current/event_dispatcher.html
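A hedged sketch of such a subscriber, assuming a Deal entity with createdAt and delay fields (the class, field, and method names here are illustrative, and the query relies on Doctrine's DQL DATE_ADD function):

use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

class PurgeExpiredDealsSubscriber implements EventSubscriberInterface
{
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public static function getSubscribedEvents()
    {
        // run the purge at the start of every request
        return [KernelEvents::REQUEST => 'purge'];
    }

    public function purge()
    {
        // delete deals whose creation date plus their per-row delay (in days) has passed
        $this->em->createQuery(
            'DELETE FROM AppBundle\Entity\Deal d
             WHERE DATE_ADD(d.createdAt, d.delay, \'day\') < CURRENT_TIMESTAMP()'
        )->execute();
    }
}

With Symfony 3.3's default service configuration (autowire/autoconfigure), the subscriber should be picked up automatically; otherwise tag it with kernel.event_subscriber. The same purge() body would also work from a controller action if you prefer the button-click approach.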
I have a table with a trigger that points to an assembly:
CREATE TRIGGER [dbo].[triggername] ON [dbo].[tablename]
WITH EXECUTE AS CALLER
AFTER DELETE, UPDATE
NOT FOR REPLICATION
AS EXTERNAL NAME [Namofassembly].[blahblah].[blahblah]
We are also using code-first EF in .NET 4.
When I use delete, everything works fine but the trigger does not get called.
dataRepo.UsersPermanentAuditAssignments.Remove(isInsertFound);
When I use update I get a permissions error. This happens both when I try it through the object model and via a dataRepo.Database.ExecuteSqlCommand(updateSql):
System.Data.SqlClient.SqlException: The context transaction which was active before entering user defined routine, trigger or aggregate "name" has been ended inside of it, which is not allowed. Change application logic to enforce strict transaction nesting.
Everything works fine when I run the queries via the sql management studio.
I also am not able to change this configuration so while I don't care for this design I am not able to change it.
My questions are:
1> Why would the delete not get logged but work?
2> Do I need to add something extra to my repo configuration object that will allow this to work? Do I need to add some transaction-like unit of work before I start this, since it has a trigger, maybe?
I have figured out the causes of this issue.
It relates to having a composite primary key (station,user) and trying to update one of the values.
I could not update any column of the primary key, ie change the user assigned to a station.
The trigger failure masked the issue of not being able to update a value inside the key.
My experiments show the following for the composite key / PK update:
Method                     History Trigger    Result
EF.SaveChanges             Enabled            Fail at trigger
EF.SaveChanges             Disabled           Fail at trigger
EF.ExecuteSQLCommand(sql)  Enabled            Fail at trigger
EF.ExecuteSQLCommand(sql)  Disabled           Works
Unfortunately, I don't have the ability to change to a surrogate key with a unique index, which would work. The CLR trigger also prevents me from using Database.ExecuteSqlCommand(sql), which I believe is actually a problem with the CLR, which I have no ability to modify.
So my advice (which I can't take) is: if you run into this, use a surrogate key and a unique index instead of combining the two; a sketch of that mapping is below.
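A hedged sketch of that alternative mapping with EF code-first data annotations (IndexAttribute requires EF 6.1+; the entity and column names are illustrative):

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class StationAssignment
{
    [Key]
    public int Id { get; set; }  // surrogate primary key

    // uniqueness of the old composite key is preserved via a unique index,
    // so either column can now be updated without touching the PK
    [Index("IX_Station_User", 1, IsUnique = true)]
    public int StationId { get; set; }

    [Index("IX_Station_User", 2, IsUnique = true)]
    public int UserId { get; set; }
}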
If anyone knows a way to get EF to let you change a value inside a composite primary key, please comment.
I am trying to change which DB connection is used based on several conditions inside a custom module hook, aptly named mymodule_init().
hook_init() seemed a logical place to put this functionality because it's called so early in the bootstrap, supposedly before any DB queries?
So I have several connections in a pool, and which one is used is determined by the module. For the life of me, I can't get the system to persist the DB; it seems to be resetting itself back to 'default' after this hook is executed. Searching the codebase has little effect as well; only one or two calls to db_set_active() are made.
Any ideas? What hook should I override to change the DB connection at runtime, before any DB activity has been done?
Cheers,
Alex
hook_init is hardly "early in the game", and certainly not before the first database queries. The bootstrap order is: load configuration, try to serve the page from cache, initialize the database, load variables, load session, page header. The first hook to fire is hook_boot, either when the page cache has a hit or in the page-header phase; by then the variable-init phase has at least fired a query to load the variables from the database (or to retrieve them from cache, but you can't rely on cache, and the default cache is the database anyway). All is not lost, however. You can either put your code right in settings.php or write a small cache handler, something like this:
class HackyDatabaseCache extends DrupalDatabaseCache {
  function __construct($bin) {
    // your code finding the database here.
    parent::__construct($bin);
  }
}
Then add $conf['cache_backends'][] = 'path/to/hackydatabasecache.inc'; and $conf['cache_class_cache_page'] = 'HackyDatabaseCache'; to your settings.php. This will make sure your code fires before any queries. If you are using memcache or MongoDB for caching, then extend that backend's class with the same code; just change which class is extended.
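For clarity, the settings.php additions would look like this (the include path is illustrative):

// settings.php
$conf['cache_backends'][] = 'sites/all/modules/mymodule/hackydatabasecache.inc';
$conf['cache_class_cache_page'] = 'HackyDatabaseCache';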