Can I access the DB queries counter programmatically in Symfony2?

I find the Symfony2 toolbar very useful, especially the DB queries counter all the way to the right.
The program I'm writing needs to access this value before returning a response; in my case, before redirecting to another URL.
Is it possible?

You can access it through the profiler, the same way functional tests do:
http://symfony.com/doc/current/cookbook/testing/profiling.html
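For reference, the test approach from that page looks roughly like this; outside of tests, a minimal sketch (assuming the dev environment, where DoctrineBundle registers a profiling DebugStack per connection, and a made-up route name) is to count the logged queries directly, since that is what the toolbar's DB panel shows:

    // In a functional test (per the cookbook page above), inside a WebTestCase:
    $client->enableProfiler();
    $client->request('GET', '/some/path');
    if ($profile = $client->getProfile()) {
        $queryCount = $profile->getCollector('db')->getQueryCount();
    }

    // In a controller, before redirecting; inside any controller extending
    // Symfony's base Controller. The DebugStack holds every statement
    // executed so far on the "default" connection:
    public function someAction()
    {
        /** @var \Doctrine\DBAL\Logging\DebugStack $logger */
        $logger = $this->get('doctrine.dbal.logger.profiling.default');
        $queryCount = count($logger->queries);

        // ... log or act on $queryCount, then redirect as usual ...
        return $this->redirect($this->generateUrl('some_route')); // hypothetical route
    }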

Related

Firebase Database Migration

Coming from a SQL background, I'm wondering: how does one go about doing database migrations in Firebase?
Assume I have the following data in Firebase: {dateFrom: 2015-11-11, timeFrom: 09:00} .... and now the front-end client will store and expect data in the form {dateTimeFrom: 2015-11-11T09:00:00-07:00}. How do I update Firebase so that every dateFrom: xxxx and timeFrom: yyyy pair is removed and replaced with dateTimeFrom: xxxxyyyy? Thanks.
You have to create your own script that reads the data, transforms it, and writes it back. You can either read one node at a time or read the whole DB if it is not big. You could also decide to leave the logic in your client and transform each record on the fly when it is accessed (if it ever is).
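A one-off script along these lines could do it. This is a sketch against the Realtime Database REST API; the database URL, auth token, and /bookings path are placeholders, and the fixed -07:00 offset comes straight from the question:

    <?php
    // Hypothetical one-off migration script (Realtime Database REST API).
    $databaseUrl = 'https://your-project.firebaseio.com'; // placeholder
    $authToken   = 'YOUR_DATABASE_SECRET';                // placeholder

    // 1. Read the whole node (fine for a small DB; page through otherwise).
    $records = json_decode(
        file_get_contents("$databaseUrl/bookings.json?auth=$authToken"),
        true
    );

    foreach ($records as $key => $record) {
        if (!isset($record['dateFrom'], $record['timeFrom'])) {
            continue; // already migrated or malformed
        }

        // 2. Combine the two fields into one ISO-8601 value.
        $dateTimeFrom = $record['dateFrom'] . 'T' . $record['timeFrom'] . ':00-07:00';

        // 3. PATCH the record: writing null deletes a key in the
        //    Realtime Database, so the old fields are removed in the
        //    same update that adds the new one.
        $context = stream_context_create(['http' => [
            'method'  => 'PATCH',
            'header'  => "Content-Type: application/json\r\n",
            'content' => json_encode([
                'dateTimeFrom' => $dateTimeFrom,
                'dateFrom'     => null,
                'timeFrom'     => null,
            ]),
        ]]);
        file_get_contents("$databaseUrl/bookings/$key.json?auth=$authToken", false, $context);
    }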
I think you are looking for this: https://github.com/kevlened/fireway
I think it is a bad idea to pollute a project with conditionals that update data on the fly.
It is a shame Firestore doesn't provide a process for this, as migrations are very common and necessary to keep the app and DB in sync.
FWIW, since I'm using Swift and there isn't a solution like Fireway (that I know of), I've submitted a feature request to the Firebase team that they've accepted as a potential feature.
You can also submit a DB migration feature request to increase the likelihood that they create the feature.

How to best architect website when each client has own database and subdomain?

For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach, and will setting the connection string in page_init work? Is using a session variable wise? I've tended to avoid session variables except when no other solution was possible.
The problem is that this model is really complex and can leave you with errors, especially when it comes to database changes. Imagine you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. Once you factor in downtime, things get even worse.
I would approach this slightly differently: abstract your database layer behind an API that makes the database calls. From the website, you always call the API, passing along the domain whose data you want.
You may ask what advantage this gives you. The biggest one shows up during upgrades and maintenance: one API per client is much easier to reason about than one database per client. And if you really want a single deployment (I would still recommend one per client, deployed automatically), you can put a switch in the call and choose which database to connect to based on a parameter passed to the API, such as the subdomain sent in a request header, as sketched below.
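A minimal sketch of that switch, in PHP purely for illustration (the host parsing, tenant names, and DSN strings are all made up; the same pattern applies in ASP.NET by resolving the connection string per request instead of caching it in session state):

    <?php
    // Resolve the tenant's database from the request's subdomain on
    // every request; no session state involved.
    function resolveTenantDsn($host)
    {
        $subdomain = explode('.', $host)[0]; // "acme.example.com" -> "acme"

        // Central map (or lookup table) of tenant -> connection string.
        $tenants = array(
            'acme'   => 'mysql:host=db1;dbname=acme_db',   // hypothetical
            'globex' => 'mysql:host=db1;dbname=globex_db', // hypothetical
        );

        if (!isset($tenants[$subdomain])) {
            throw new RuntimeException("Unknown tenant: $subdomain");
        }
        return $tenants[$subdomain];
    }

    $pdo = new PDO(resolveTenantDsn($_SERVER['HTTP_HOST']));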
Let me give you a sample scenario and how I would suggest approaching it (this holds for a database or an API).
Say I want to add a new data field. The first step is to add the field on the backend (API or database) and deploy it. If it is an API, you can even test this by calling the API and checking that the new field is now returned; that is not a problem for your UI, because it is just a field the UI does not use yet. After that, you change the UI to actually use the field and deploy that to production.

Can and Should I cache symfony2 getUser()

I am using Symfony (currently version 2.6.4). Whenever I want to check if a user is logged in (I am also using FOSUserBundle), I use $user = $this->getUser(); in my controller, which works out just fine. But if I open 10 links in 1 second, the underlying query is repeated for all 10 pages, which is not ideal in my opinion. So my question is: is there a way to cache this query for, say, 60 seconds, and is it even advisable? Will it affect new registrations or something? I am using APC as my Doctrine cache, but if someone knows the answer, please also explain other approaches in case other people wonder how to do this. Thanks.
To start with, SQL databases do a good job of automatically caching queries. So while there is some overhead in composing and sending the query to the server, the server itself will respond very quickly. You won't save much by adding another cache.
Should you still try and optimize? In your example of 10 requests per second one assumes that the requests are actually doing something besides getting the user. It's up to you to decide if caching the query will actually speed things up. In most cases the answer will be no. Trying to save every possible microsecond is called premature optimization and is something to avoid.
Having said that, it's worthwhile to look at what the security system is doing. Selected user information is stored in the session. You can use the debug profile bar to look at it. For each request, the security system pulls the user out of the session and then calls $user = $userProvider->refreshUser($user); By default, refreshUser is what causes the database to be queried.
You can easily plug in your own user provider (http://symfony.com/doc/current/cookbook/security/custom_provider.html) which just returns $user. No database interaction at all. Of course if the user's database information does change then they will need to log out and then log back in to see the changes. Or do something else to trigger a real refresh. But for many apps, not refreshing at all will work just fine.
It would also be easy enough to put a time stamp into the session. Your refreshUser method could then use the time stamp to decide if a refresh was actually needed.
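A minimal sketch of such a provider (it targets the Symfony 2.x UserProviderInterface; the class name is made up, and it decorates whatever provider the firewall currently uses):

    use Symfony\Component\Security\Core\User\UserInterface;
    use Symfony\Component\Security\Core\User\UserProviderInterface;

    class NoRefreshUserProvider implements UserProviderInterface
    {
        private $inner;

        public function __construct(UserProviderInterface $inner)
        {
            $this->inner = $inner; // the real (e.g. FOSUserBundle) provider
        }

        public function loadUserByUsername($username)
        {
            return $this->inner->loadUserByUsername($username); // login only
        }

        public function refreshUser(UserInterface $user)
        {
            // Return the session copy untouched: no query on this request.
            // Variant: keep a timestamp in the session and fall through to
            // $this->inner->refreshUser($user) once it is, say, 60s old.
            return $user;
        }

        public function supportsClass($class)
        {
            return $this->inner->supportsClass($class);
        }
    }

Point your firewall's provider at this class and the user object deserialized from the session is used as-is on every request.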
So it's easy enough to eliminate the query and actually worthwhile just as a learning experience. Security is one of the more complicated components. The more you understand it the better off you will be. Customizing a user provider is one of the easier things to do.
I just saw your comment about the OAuthBundle. I have not used that bundle in a while; I implemented my own. But I'm surprised that it's hitting the OAuth server on each request. If it is, then this would in fact be a good use case for overriding the user provider. Still, I'd be surprised if it was really doing that just for user information.

Are post-query permission checks possible with Solr?

We have, in one of our customisations, implemented permission checks with dynamic authorities in Alfresco. When migrating to Solr, the search results for nodes affected by our dynamic permissions became faulty. The reason seems to be that permission checks are done at query time, but our dynamic permissions are not taken into account :(
Here is a short explanation of how our dynamic authorities work:
Check if a node has an association to an authority; if the current user belongs to that authority (group) -> approve access. The node has a lot of different associations, and every one is checked and given READ or WRITE access depending on which association it belongs to.
Is there any way to tell the Search service to do permission checking on the returned nodes instead (like Lucene does)? One workaround I thought of would be to run the query as administrator, then iterate over the result and manually do the permission checks.
Could that be a way to solve it? Any other ideas you could share with me?
Alfresco will perform after-query permission checks on SOLR results when the security.anyDenyDenies property is set to true. This check will involve any dynamic authorities, i.e. it will be a standard check.
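That property is typically set in alfresco-global.properties:

    # alfresco-global.properties
    security.anyDenyDenies=true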
The main problem then would be to get the full results from SOLR without pre-filtering there. Other than setting the runAs user to System in a custom sub-class of org.alfresco.repo.search.impl.solr.SolrQueryLanguage (within or around the super.executeQuery call; the relevant beans are search.lucene.alfresco, search.solr.alfresco, search.fts.alfresco.index and search.solr.cmis in solr-search-context.xml), I see no simpler way to achieve this.
Note: This applies to Alfresco 4.2d and later - I don't know when after-query permissions for SOLR have actually been introduced, but they weren't present when 4.0 came out AFAIK.

Persistent user defined functions in SQLite3?

Does anyone know a way to register a "persistent" UDF in SQLite 3 (using C or PHP), or something equivalent? I know I can register a UDF as a callback function with the C or PHP API, but that function only lives as long as the application's handle to the database.
In MySQL, I can achieve this with something like CREATE FUNCTION my_udf RETURNS INTEGER SONAME 'shared_lib_containing_my_udf.so'.
The reason why I need this is that "Application A" should be notified when "Application B" inserts, updates or deletes data, using triggers that call my UDF. The application that manipulates the DB is a PHP web application. So if I can't make a UDF persistent, I'd have to register the UDF for every web request - and if the application opens more than one connection per web request, I'd even have to register the UDF multiple times per request. This would probably mean having to change the application in many places, which I really want to avoid.
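To make the per-connection limitation concrete, here is roughly what registration looks like with PHP's SQLite3 API (the function, table, and trigger names are made up):

    <?php
    $db = new SQLite3('/path/to/app.sqlite');

    // The UDF lives only as long as this $db handle; every other
    // connection that fires the trigger must register it too.
    $db->createFunction('notify_change', function ($table, $rowId) {
        // e.g. append to a log that "Application A" watches (illustrative)
        file_put_contents('/tmp/changes.log', "$table:$rowId\n", FILE_APPEND);
        return 1;
    });

    // A trigger may call the UDF, but only on connections that have it:
    $db->exec("CREATE TRIGGER IF NOT EXISTS items_ai
               AFTER INSERT ON items
               BEGIN SELECT notify_change('items', NEW.id); END");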
If this isn't possible, does anyone have a different solution?
The only solution seems to be to modify the SQLite source to add the required functions. That won't work in my scenario, though, because of deployment issues, so I'll have to abandon SQLite altogether and switch to MySQL or PostgreSQL. I wish SQLite had some kind of plugin mechanism for extending it with custom functions without recompilation, like the way UDFs are implemented in MySQL. That would probably make it considerably less "lite", though. As much as I love SQLite, it's just not the right tool for everything.
