Symfony Cache Component - Redis Adapter

I have implemented the Symfony Cache Component using the RedisAdapter. Now we would like to use a colon as the separator in our cache keys (e.g. some:cache:key:25), just like Redis recommends.
I get an exception saying the key "contains reserved characters {}()/\@:". The Symfony documentation
(https://symfony.com/doc/3.4/components/cache/cache_items.html) explains that these are reserved characters in PSR-6.
I would like to know if there is a way around this, because I am busy refactoring our cache logic to use the Symfony Cache Component. The keys are already defined, so I am not able to change them without breaking conventions. 😭

As you noted, the colon is a reserved character in the PSR-6 cache standard, which Symfony's Cache component builds on.
If you want to keep colons in your code, you could write an adapter that takes your keys and replaces the : with something else before passing them on to the regular cache.
So for example you could write an adapter that looks something like this:
use Psr\Cache\CacheItemInterface;
use Symfony\Component\Cache\Adapter\AdapterInterface;

class MyCacheAdapter implements AdapterInterface
{
    private $decoratedAdapter;

    public function __construct(AdapterInterface $adapter)
    {
        $this->decoratedAdapter = $adapter;
    }

    public function getItem($key): CacheItemInterface
    {
        $key = str_replace(':', '.', $key);

        return $this->decoratedAdapter->getItem($key);
    }

    // ...
}
For all other methods you can just proxy the call to the decorated service and return the result. It's a bit annoying to write, but the interface demands it.
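For illustration, the other key-based methods can apply the same replacement, while methods that receive no key are passed straight through. A rough sketch (not an exhaustive list of the interface's methods):

public function hasItem($key): bool
{
    return $this->decoratedAdapter->hasItem(str_replace(':', '.', $key));
}

public function deleteItem($key): bool
{
    return $this->decoratedAdapter->deleteItem(str_replace(':', '.', $key));
}

public function save(CacheItemInterface $item): bool
{
    // no key involved here, so nothing to translate
    return $this->decoratedAdapter->save($item);
}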
In your service configuration you can wire it up like this:
services:
    App\Cache\MyCacheAdapter:
        decorates: 'Symfony\Component\Cache\Adapter\RedisAdapter'
        arguments:
            $adapter: '@app.cache.adapter.redis'
This configuration is only a rough outline; both the argument and the class names might have to be adjusted. In any case, with this service decoration your adapter wraps around the original Redis adapter, so once the cache component is configured to use it, your existing keys like some:cache:key:25 will be converted to some.cache.key.25 before they are passed into the cache component, i.e. before the error message can occur.

Related

Differences between methods of Symfony service collection

For those of you who are familiar with building the Symfony container, do you know what the differences (if any) are between:
1. a tagged service collector using a compiler pass
2. a tagged service collector using the supported shortcut
3. a service locator, especially one that collects services by tags
Specifically, I am wondering whether these methods differ in when the collected services become available during the container build process. I am also wondering about the 'laziness' of each of them.
It can certainly be confusing when trying to understand the differences. Keep in mind that the latter two approaches are fairly new and the documentation has not quite caught up, so you might actually consider creating a fresh project and doing some experimenting.
Approach 1 is basically the "old school" style. You have:
class MyCollector
{
    private $handlers = [];

    public function addHandler(MyHandler $handler)
    {
        $this->handlers[] = $handler;
    }
}

# in the compiler pass
$myCollectorDefinition->addMethodCall('addHandler', [new Reference($handlerServiceId)]);
So basically the container will instantiate MyCollector then explicitly call addHandler for each handler service. In doing so, the handler services will be instantiated unless you do some proxy stuff. So no lazy creation.
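For reference, the compiler pass doing that wiring might look roughly like this (class, tag, and service names are illustrative):

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

class MyHandlerPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        $collector = $container->findDefinition(MyCollector::class);

        // register every service tagged 'my.handler' on the collector
        foreach ($container->findTaggedServiceIds('my.handler') as $id => $tags) {
            $collector->addMethodCall('addHandler', [new Reference($id)]);
        }
    }
}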
The second approach provides a somewhat similar capability but uses an iterable object instead of a plain PHP array:
class MyCollection
{
    private $handlers;
    public function __construct(iterable $handlers)
    {
        $this->handlers = $handlers;
    }
}
# services.yaml
App\MyCollection:
    arguments:
        - !tagged_iterator my.handler
One nice thing about this approach is that the iterable actually ends up connecting to the container via closures and will only instantiate individual handlers when they are actually accessed. So lazy handler creation. Also, there are some variations on how you can specify the key.
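To make the laziness concrete, consuming the iterable might look like this (handle() and supports() are hypothetical methods used only for illustration):

public function handle($task)
{
    foreach ($this->handlers as $handler) {
        // each handler service is only instantiated when the iterator reaches it
        if ($handler->supports($task)) {
            return $handler->handle($task);
        }
    }
}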
I might point out that typically you auto-tag your individual handlers with:
# services.yaml
services:
    _instanceof:
        App\MyHandlerInterface:
            tags: ['my.handler']
So no compiler pass needed.
The third approach is basically the same as the second, except that handler services can be accessed individually by an index. This is useful when you need just one out of all the possible services, and of course the selected service is only created when you ask for it.
class MyCollection
{
    private $locator;

    public function __construct(ServiceLocator $locator)
    {
        $this->locator = $locator;
    }

    public function doSomething($handlerKey)
    {
        /** @var MyHandlerInterface $handler */
        $handler = $this->locator->get($handlerKey);
    }
}

# services.yaml
App\MyCollection:
    arguments: [!tagged_locator { tag: 'app.handler', index_by: 'key' }]
I should point out that in all these cases, the code does not actually know the class of your handler service, hence the @var comment to keep the IDE happy.
There is another approach which I like, in which you make your own ServiceLocator and then specify the type of object being located, so there is no need for a @var comment. Something like:
class MyHandlerLocator extends ServiceLocator
{
    public function get($id): MyHandlerInterface
    {
        return parent::get($id);
    }
}
The only way I have been able to get this approach to work is with a compiler pass. I won't post the code here as it is somewhat outside the scope of the question, but in exchange for a few lines of pass code you get a nice clean custom locator which can also pick up handlers from other bundles.

MediatR handlers - are they singletons?

I am using MediatR in my .NET Core project and I was wondering if the handlers in MediatR are singletons, or if new instances are created for every Send request. I know the mediator itself is a singleton, but I am not very sure about the handlers it uses for a command or query.
I tend to think they would also be singletons, but I just wanted to double-check.
In fact, the lifetime of all of these is well documented:
https://github.com/jbogard/MediatR.Extensions.Microsoft.DependencyInjection/blob/master/README.md
Just for reference: IMediator is transient (not a singleton), concrete IRequestHandler<> implementations are transient, and so on; it is actually transient everywhere.
But be aware of using scoped services in MediatR handlers: they do not behave as you might expect (more like singletons) unless you manually create a scope.
For the handlers, after following the source code, it looks like they are all added as Transient.
https://github.com/jbogard/MediatR.Extensions.Microsoft.DependencyInjection/blob/1519a1048afa585f5c6aef6dbdad7e9459d5a7aa/src/MediatR.Extensions.Microsoft.DependencyInjection/Registration/ServiceRegistrar.cs#L57
services.AddTransient(@interface, type);
For the IMediator itself, it looks like its lifetime is taken from the service configuration by default:
https://github.com/jbogard/MediatR.Extensions.Microsoft.DependencyInjection/blob/1519a1048afa585f5c6aef6dbdad7e9459d5a7aa/src/MediatR.Extensions.Microsoft.DependencyInjection/Registration/ServiceRegistrar.cs#L223
services.Add(new ServiceDescriptor(typeof(IMediator), serviceConfiguration.MediatorImplementationType, serviceConfiguration.Lifetime));
Note that the service configuration is a configuration object which, unless you change it somewhere along the way, defaults to transient as well:
public MediatRServiceConfiguration()
{
    MediatorImplementationType = typeof(Mediator);
    Lifetime = ServiceLifetime.Transient;
}
Using ASP.NET Core's DI you can manually register your handlers and use whatever lifetime you want. So for example:
services.AddScoped<IPipelineBehavior<MyCommand, MyResponse>, MyHandler>(); // MyResponse being whatever the pipeline returns
We actually wrap MediatR so we can add various bits and bobs, so it ends up being a registration extension like this (CommandContext/QueryContext hold various stuff we use all the time, and ExecutionResponse is a standard response so we can have standard post-handlers that know what they are getting):
public static IServiceCollection AddCommandHandler<THandler, TCommand>(this IServiceCollection services)
    where THandler : class, IPipelineBehavior<CommandContext<TCommand>, ExecutionResponse>
    where TCommand : ICommand
{
    services.AddScoped<IPipelineBehavior<CommandContext<TCommand>, ExecutionResponse>, THandler>();
    return services;
}
Which is used like this:
services.AddCommandHandler<MyHandler, MyCommand>();
We have similar for queries (AddQueryHandler<.....)
Hope that helps

BreezeJS modified route not working

My application has two databases with exactly the same schema. Basically, I need to change the DbContext based on which data I'm accessing: two countries are in one database and four countries in the other. I want the client to decide which context is used. I tried changing my BreezeWebApiConfig file so that the route looks like this:
GlobalConfiguration.Configuration.Routes.MapHttpRoute(
    name: "BreezeApi",
    routeTemplate: "breeze/{dbName}/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional, dbName = "db1" }
);
I added the string parameter to the controller actions:
[HttpGet]
public string Metadata(string dbName = "")
{
    return _contextProvider.Metadata();
}
And changed the entityManager service name.
Now when the client spins up, it accesses the correct Metadata action and I get this message:
Error: Metadata query failed for: /breeze/clienthistory/kenya/Metadata. Unable to either parse or import metadata: Type .... already exists in this MetadataStore
When I go to the metadata URL in the browser, I get the correct metadata (exactly the same as when I remove the {dbName} segment from the route). If I remove the {dbName} segment from the route, I get no error and everything works fine.
(I have not started implementing the multiple contexts yet -- I am just trying to make the additional segment work).
Thanks.
I think the problem is that your Breeze client is issuing two separate requests for the same metadata, one under each of the two "serviceNames". Breeze tries to blend them both into the same EntityManager.metadataStore ... and can't, because that would mean duplicating EntityType names.
One approach that should work is to begin your application by fetching the metadata immediately upon app start and then adding all the associated "DataServiceNames" to the MetadataStore.
Try something along these lines (pseudo-code):
var manager;
var store = new breeze.MetadataStore();

return store.fetchMetadata(serviceName1)
    .then(gotMetadata)
    .catch(handleFail);

function gotMetadata() {
    // register the existing metadata with each of the other service names
    store.addDataService(new breeze.DataService({ serviceName: serviceName2 }));
    // ... more services as needed ...

    manager = new breeze.EntityManager({
        dataService: store.getDataService(serviceName1), // service to start with
        metadataStore: store
    });
    return true; // return something
}
Alternative
Other approaches to consider don't involve a 'db' placeholder in the base URL, nor any toying with the Web API routes. Let's assume you stay vanilla in that respect, with your basic service name:
var serviceName = '/breeze/clienthistory/';
For example, you could add an optional parameter to your requests (let's call it db) as needed via a withParameters clause.
Here is a query:
return breeze.EntityQuery.from('Clients')
    .where(...)
    .withParameters({ db: database1 }) // database1 == 'kenya'
    .using(manager).execute()
    .then(success).catch(failed);
which produces a URL like:
/breeze/clienthistory/Clients/?$filter=...&db=kenya
It makes an implicit first-time-only metadata request that resolves to:
/breeze/clienthistory/Metadata
Your server-side Web API query methods can expect db as an optional parameter:
[HttpGet]
public string Metadata(string db = "")
{
    ... do what is right ...
}
Saves?
I assume that you also want to identify the target database when you save. There are lots of ways you can include that in the save request:
in a custom HTTP header via a custom AJAX adapter (you could do this for queries too)
in a query string parameter or hash segment on the saveChanges POST request URL (again via a custom AJAX adapter).
in the tag property of the saveOptions object (easily accessed by the server implementation of SaveChanges)
in the resourceName property of the saveOptions object (see "named save")
You'll want to explore this variety of options on your own to find the best choice for you.

How to roll back any transaction when doing tests with PHPUnit in Symfony2

I'm testing controllers using the crawler, but when I post a form that doesn't generate any errors, it saves the form data in the database.
How can I prevent it from doing so, without changing the controller and without testing something else?
Is there a best practice for this kind of test?
I tried a rollback, but in the ControllerTest there are no active transactions anymore.
You need to write your own test client class extending Symfony\Bundle\FrameworkBundle\Client.
This is because the default client doesn't share its connection object between requests (so you can't use transactions outside the test client). If you extend the test client you can handle the transaction on your own.
In your client class you need to make the connection object static and override the doRequest() method so that it reuses that static connection instead of creating a new one every time.
It's well described here:
http://alexandre-salome.fr/blog/Symfony2-Isolation-Of-Tests
Once you have your own doRequest() method, all you need to do is handle the transaction, i.e. wrap the handle() call with a begin and a rollback. Your doRequest() method could look something like this:
protected function doRequest($request)
{
    // here you need to create your static connection object if it doesn't exist yet
    // and put it into the service container as 'doctrine.dbal.default_connection'
    (...)

    self::$connection->beginTransaction();
    $response = $this->kernel->handle($request);
    self::$connection->rollback();
    (...)

    return $response;
}
You can read the PHPUnit documentation on database testing:
http://www.phpunit.de/manual/3.6/en/database.html
You will need to set up your database and tear down the changes you made afterwards.
If you think the above is too complicated, maybe you are interested in mocking your database layer:
http://www.phpunit.de/manual/3.6/en/test-doubles.html
A mock is a custom object, based on the original one, into which you put your own test controls. In this case you are probably interested in mocking Doctrine's Entity Manager.
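As a rough sketch of that last idea, a test could replace the Entity Manager with a mock so that nothing is ever written to the database (ContentService and its create() method are purely hypothetical names for whatever class normally persists your entities):

class ContentServiceTest extends \PHPUnit_Framework_TestCase
{
    public function testCreateDoesNotHitTheDatabase()
    {
        // mock the entity manager so persist()/flush() never touch a real database
        $em = $this->getMockBuilder('Doctrine\ORM\EntityManager')
            ->disableOriginalConstructor()
            ->getMock();
        $em->expects($this->once())->method('persist');
        $em->expects($this->once())->method('flush');

        $service = new ContentService($em);
        $service->create('some content');
    }
}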

Where to put entity 'helper functions'?

I am having trouble understanding a key concept of Symfony 2.
I am working on a website where users can create content which can then be sent to other people using a secret URL, something like www.yoursite.com/{secret-identifier-string}.
I plan on doing this as follows:
Persist the user's content.
Create the identifier string from the content id and the creation timestamp (or any other value which will never change again, as an extra safety feature) with a two-way encryption method (like mcrypt_encrypt).
Create the link and display it to the user to give away.
Whenever a URL is called, the identifier string is decrypted. If the provided timestamp matches the corresponding value of the content row, the page is displayed.
My questions are:
Would you consider this a good procedure in general?
Outside Symfony2 I would create helper methods like getIdentifierString() and getContentPageLink(). Where do I put the corresponding code in Symfony2? Does it belong inside the entity class? If so, I am having problems because I am using a service class for encryption, and the service is only available in the controller.
Thanks a lot!
With all due respect to DI and service oriented design, namespacing and all the good stuff we benefit from,
I still refuse to type or read:
$this->mysyperfancyservice->dowhatevertheseviceissupposedtodowith($the_entity);
where a simple
do($the_entity);
is all I need in 150 places across my project, where do() is something everyone working on the project will know about.
That is what a helper is meant for: readability and simplicity. As long as it doesn't depend on other services, though.
My solution for that is a basic Composer feature:
"autoload": {
    ...
    "files": [ "src/helper/functions.php" ]
}
I put a very limited number of extremely useful functions in the src/helper/functions.php file and add them to the project like that.
For the functions to become available project-wide, you need to run:
composer dump-autoload
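For instance, src/helper/functions.php could contain something as small as this (the function itself is purely illustrative):

<?php

if (!function_exists('short_hash')) {
    // a tiny project-wide helper with no service dependencies
    function short_hash($value)
    {
        return substr(sha1($value), 0, 8);
    }
}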
The general idea is that you create "helper classes" rather than "helper functions". Those classes may have dependencies on other classes, in which case you'll define them as a service.
It sounds like your methods do have dependencies (on encryption), so you can make a new service that is responsible for generating links. In its constructor it would take the encryptor, and its methods would be passed the entity to generate a link/string for.
For example, your service definition:
<service id="app_core.linkifier" class="App\CoreBundle\Linkifier">
    <argument type="service" id="the.id.for.encryptor"/>
</service>
and class:
class Linkifier
{
    private $encryptor;

    public function __construct(Encryptor $encryptor)
    {
        $this->encryptor = $encryptor;
    }

    public function generateContentPageLink(Entity $the_entity)
    {
        return $this->encryptor->encrypt($the_entity);
    }
}
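Usage in a controller might then look roughly like this (the injected property, the Content type hint, and the template name are assumptions made for the example):

public function shareAction(Content $content)
{
    // $this->linkifier is the Linkifier service injected into the controller
    $link = $this->linkifier->generateContentPageLink($content);

    return $this->render('content/share.html.twig', array('link' => $link));
}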
