When should FDPhysSQLiteDriverLink be used with SQLite? Is it necessary, and how is it set up in Delphi?
My SQLite connection works without FDPhysSQLiteDriverLink.
As I recall, it is needed for certain advanced features in some of the other SQLite components:
Security (TFDSQLiteSecurity). This class lets you manage SQLite database encryption and passwords. It requires FDPhysSQLiteDriverLink.
Database validation (TFDSQLiteValidate). This component gives you access to the SQLite validation service to perform operations such as Sweep and Vacuum. It requires FDPhysSQLiteDriverLink.
Backup (TFDSQLiteBackup). This component lets you perform database backup/restore/copy operations. It requires FDPhysSQLiteDriverLink.
As for setting it up, I did nothing other than add the component to my project and then point the other components mentioned above to it. I left DriverID blank.
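For illustration, here is a minimal runtime sketch of that wiring using TFDSQLiteBackup; the file paths are placeholders, and unit names may vary by RAD Studio version:

uses
  FireDAC.Phys.SQLite; // declares TFDPhysSQLiteDriverLink and TFDSQLiteBackup

procedure BackupDatabase;
var
  DriverLink: TFDPhysSQLiteDriverLink;
  Backup: TFDSQLiteBackup;
begin
  DriverLink := TFDPhysSQLiteDriverLink.Create(nil);
  Backup := TFDSQLiteBackup.Create(nil);
  try
    Backup.DriverLink := DriverLink;            // the only wiring required
    Backup.Database := 'C:\data\main.db';       // source database (placeholder path)
    Backup.DestDatabase := 'C:\data\backup.db'; // destination database (placeholder path)
    Backup.Backup;
  finally
    Backup.Free;
    DriverLink.Free;
  end;
end;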
I have a DynamoDB table that was created via the console, and I want to enable multi-region support by adding to its list of replicationRegions using the CDK.
After importing the original table using:
const table = Table.fromTableArn(this, "ImportedTable", "arn:aws:dynamodb...");
I realized I did not have access to the table's replicationRegions field as I would when creating a new one.
Is there a way to add to the list of replicationRegions on an imported DynamoDB table using the CDK?
Yes, but use cdk import instead of Table.fromTableArn.
The fromSomethingArn-type methods create *read-only* references to an external resource.* You can't use them to modify a resource. The ISomething constructs that these methods return are useful for things like creating new permissions and targets.
The cdk import command is preview functionality to properly import existing resources into a CDK stack. A DynamoDB Table is a resource type that supports import operations. Once this one-time import completes, the "adopting" CDK stack can modify the "imported" table like any other, say, by adding replication regions.
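For example, the adopting stack could declare something like the following rough sketch; the table name and key schema here are assumptions, and they must match the live table for cdk import to adopt it:

import { Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

class AdoptingStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // Declare the existing table as an owned resource; after `cdk import`
    // adopts it, adding replicationRegions deploys like any other change.
    new dynamodb.Table(this, 'ImportedTable', {
      tableName: 'my-existing-table', // assumption: must match the real table
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      replicationRegions: ['us-east-1', 'eu-west-1'],
    });
  }
}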
In other words, the CDK can only modify resources it owns. To make ad-hoc modifications to an existing resource without permanently "adopting" it, use the SDKs instead.
* Earlier versions of the CDK docs did call these from... methods "importing" operations, but have been updated to use the less ambiguous term "referencing".
I have an app using React + Redux, coupled with Firebase for the backend.
Oftentimes I want to add new attributes to existing objects.
When I do, existing objects won't get the new attribute until they're modified by the version of the app that handles it.
For example, let's say I have a /categories/ node containing objects such as this:
{
name: "Medical"
}
Now let's say I want to add an icon field with a default value.
Is it possible to update all categories at once so that field always exists with the default value?
Or do you handle this in the client code?
Right now I'm always testing values to see whether they're present, but it doesn't seem like a very good way to go about it. I'd like to have one place to define defaults.
It seems like having classes for each object type would be interesting, but I'm not sure how to go about this in Redux.
Do you just use the reducer to turn all categories into class instances when you fetch them, for example? I'm worried this would be heavy performance-wise.
Any write operation to the Firebase Database requires that you know the exact path to the node that you're writing.
There is no built-in operation to bulk update nodes with a path that is only partially known.
You can either keep your client-side code robust enough to handle the missing properties, or you can indeed run a migration script to add the new property to each relevant node. But since that script will have to know the exact path of each node to write, it will likely first have to read/query the database to determine those paths. Depending on the number of items to update, it could possibly use multi-location updates after that to update multiple nodes in one call. E.g.
firebase.database().ref("categories").update({
"idOfMedicalCategory/icon": "newIconForMedical",
"idOfCommercialCategory/icon": "newIconForCommercial"
"idOfTechCategory/icon": "newIconForTech"
})
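If you don't know the IDs up front, a sketch of such a migration might look like this; the "defaultIcon" value is a placeholder:

firebase.database().ref("categories").once("value").then((snapshot) => {
  const updates = {};
  // Build one multi-location update covering every category missing an icon.
  snapshot.forEach((child) => {
    if (!child.child("icon").exists()) {
      updates[child.key + "/icon"] = "defaultIcon"; // placeholder default
    }
  });
  return firebase.database().ref("categories").update(updates);
});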
From a couple of articles I have found online:
http://typecastexception.com/post/2013/10/27/Configuring-Db-Connection-and-Code-First-Migration-for-Identity-Accounts-in-ASPNET-MVC-5-and-Visual-Studio-2013.aspx
http://www.codeproject.com/Articles/790720/ASP-NET-Identity-Customizing-Users-and-Roles
I have seen that it is very simple to extend the ApplicationUser class in MVC 5/Identity 2.0. It basically requires adding properties to that class and updating all dependent views/viewmodels, etc., to implement the new functionality. The only question I have remaining is that these articles all give examples from a Code First perspective. How would extending the ApplicationUser class work from a Database First perspective?
Here is what I imagine:
1.) Change the connection string to your production database. (In my case SQL Azure)
2.) Create the tables that Identity 2.0 normally creates automatically, in SQL Azure.
3.) Fill those tables with the default properties and types.
4.) Add custom properties to the AspNetUsers table. (E.G. City, Zip, etc)
5.) Add those properties to the actual ApplicationUser class
6.) Update dependent views, controllers, viewmodels etc.
Is there a simpler way of doing this?
No, there is no simpler way to extend ApplicationUser. Code First is pretty much the same: you add the properties first, create a migration, run the migration, then update your controllers/views.
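For reference, step 5 from the question boils down to a sketch like this, using the example property names from the question:

using Microsoft.AspNet.Identity.EntityFramework;

public class ApplicationUser : IdentityUser
{
    // Custom columns matching those added to the AspNetUsers table.
    public string City { get; set; }
    public string Zip { get; set; }
}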
I have a table with a trigger that points to an assembly:
CREATE TRIGGER [dbo].[triggername] ON [dbo].[tablename]
WITH EXECUTE AS CALLER
AFTER DELETE, UPDATE
NOT FOR REPLICATION
AS EXTERNAL NAME [Namofassembly].[blahblah].[blahblah]
We are also using Code First EF in .NET 4.
When I use delete, everything works fine, but the trigger does not get called.
dataRepo.UsersPermanentAuditAssignments.Remove(isInsertFound)
When I use update, I get a permissions error. This happens both when I try it through the object model and when I use dataRepo.Database.ExecuteSqlCommand(updateSql):
System.Data.SqlClient.SqlException: The context transaction which was active before entering user defined routine, trigger or aggregate "name" has been ended inside of it, which is not allowed. Change application logic to enforce strict transaction nesting.
Everything works fine when I run the queries via SQL Server Management Studio.
I am also not able to change this configuration, so while I don't care for this design, I cannot change it.
My questions are:
1> Why would the delete work but not get logged?
2> Do I need to add something extra to my repo configuration object to allow this to work? Do I need to add some transaction handling, like a unit of work, before I start, since it has a trigger?
I have figured out the cause of this issue.
It relates to having a composite primary key (station, user) and trying to update one of its values.
I could not update any column of the primary key, i.e. change the user assigned to a station.
The trigger failure masked the issue of not being able to update a value inside the key.
My experiments show the following for the composite-key/PK update:
Method                     History    Result
EF.SaveChanges             Enabled    Fail at trigger
EF.SaveChanges             Disabled   Fail at trigger
EF.ExecuteSQLCommand(sql)  Enabled    Fail at trigger
EF.ExecuteSQLCommand(sql)  Disabled   Works
Unfortunately, I don't have the ability to change to a surrogate key with a unique index, which would work. The CLR trigger also prevents me from using Database.ExecuteSqlCommand(sql), which I believe is actually a problem with the CLR itself, which I have no ability to modify.
So my advice (which I can't take myself) is: if you hit this, use a surrogate key plus a unique index instead of combining the two.
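For anyone who can take that advice, the schema change would look something like this; the table and column names are made up:

-- Surrogate primary key; the old composite key becomes a unique constraint,
-- so either column can be updated without rewriting the primary key.
CREATE TABLE dbo.StationAssignments (
    Id      INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Station INT NOT NULL,
    [User]  INT NOT NULL,
    CONSTRAINT UQ_StationAssignments_Station_User UNIQUE (Station, [User])
);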
If anyone knows any way to get EF to let you change a value inside a composite primary key, please comment.
I'd like to expose some connections for a web part at runtime; at compile time I don't know what they are, and I'm wondering if anyone can provide suggestions on where to start.
All the examples I've read seem to do this statically using [ConnectionConsumer] and [ConnectionProvider], which obviously has to be done in code; at that point in time, however, I don't know what I need to expose.
My use case would be something like a grid that uses a DataTable. The DataTable is retrieved by using a SQL statement:
select * from myTable
The connections I want to expose come into play when this changes to:
select * from myTable where columnA = myConnection1
At this point I want to expose a connection for my WebPart called 'myConnection1'; if I add multiple where clauses, I want multiple connections that can be linked from other WebParts.
EDIT
An example of this would be how Reporting Services within SharePoint handles connections. It seems to use a custom WebPartManager that determines at runtime the number, names, and types of connections that need exposing.
You can create connections between web parts dynamically:
wpMgr.ConnectWebParts(wp1, cp1, wp2, cp2)
Ted Pattison: http://msdn.microsoft.com/en-us/magazine/cc188696.aspx#S6
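Fleshed out slightly, the runtime wiring looks something like this; picking the first connection point of each part is an assumption, and the variable names are placeholders:

// Requires System.Web.UI.WebControls.WebParts; wpMgr is the page's
// WebPartManager (SPWebPartManager in SharePoint), wp1/wp2 the provider
// and consumer web parts.
ProviderConnectionPoint cp1 = wpMgr.GetProviderConnectionPoints(wp1)[0];
ConsumerConnectionPoint cp2 = wpMgr.GetConsumerConnectionPoints(wp2)[0];
WebPartConnection conn = wpMgr.ConnectWebParts(wp1, cp1, wp2, cp2);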
Not sure what is dynamic in your question:
- the schema of the data that flows through the connections, or
- the creation of these connections from provider to consumer web parts at runtime?
Hope this helps.
In the end I determined that the best way was to use the IWebPartParameters interface and expose them manually.
http://blog.mindbusiness.de/blog/2011/09/05/implementation-of-iwebpartparameters-web-part/