How to override the DatabaseQueue from a module in Drupal 8

I have to override the default implementation of the DatabaseQueue in Drupal 8. The reasons for this are not important.
I was looking at the QueueFactory and I learned that each queue worker can have a different QueueInterface implementation. If one isn't specified, it falls back to the DatabaseQueue (in fact one can also specify a different queue factory, but this simplification is accurate enough).
The QueueFactory uses the Settings object as its source of configuration for the queues:
$this->settings->get('queue_service_' . $name, $this->settings->get('queue_default', 'queue.database'));
The problem is (as far as I can tell) that the Settings object takes its configuration data from the sites/*/settings.php file. Indeed, if I extend this file with queue configuration like this:
$settings['queue_service_my_custom_queue_worker'] = 'my_module.my_custom_queue_factory';
then it works fine.
But here's the deal. I'm creating a module that will be distributed to many clients, so this approach of editing the settings.php file is not ideal. Imagine asking everyone to make this change; it's very error-prone. So, is there a way to extend those settings from my module?
I tried using the configuration overrides but it doesn't work for this case.

You could decorate the queue service to customize the behavior of its get() method for your custom queue.
See this doc: https://www.phase2technology.com/blog/using-symfony-service

Related

What is the point of using the initParams parameter of the @WebServlet annotation on a Servlet?

After searching Google for two days, I am left pretty confused.
I can see why one would use an XML file (web.xml) to define init parameters: since the file is read by the servlet container without recompilation, it lets us adjust parameters much more easily, at runtime, or at most with a restart of the server app.
But if we define init parameters in an annotation, that is, in a Java source file, they are not available on the production server for us to modify, and changing them requires recompilation. In that case, why not just use a constant?
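For concreteness, here is roughly what I mean by defining init params in an annotation (the servlet and its "uploadDir" parameter are made-up examples):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebInitParam;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The init params are declared in the source file, right next to the servlet itself.
@WebServlet(urlPatterns = "/upload",
            initParams = { @WebInitParam(name = "uploadDir", value = "/tmp/uploads") })
public class UploadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Read the parameter through the ServletConfig, the same way as with web.xml.
        String uploadDir = getServletConfig().getInitParameter("uploadDir");
        resp.getWriter().println("uploadDir = " + uploadDir);
    }
}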
Please refrain from lecturing me about how bad/good the use of constants is in general, my question is very specific to init params in annotations. I understand that you may feel obligated to spread your religion to the world and show me the light, but don't. Just don't.

Serving static content programmatically from a Servlet - does the spec have anything available, or should I roll a custom one?

I have a database with original file names, the locations of the files on disk, and metadata such as the user that owns each file. The files on disk have scrambled names. When a user requests a file, the servlet will check whether they are authorized, then send the file under its original name.
While researching the subject I've found several cases that cover this issue, but nothing specific to mine.
Essentially there are 2 solutions:
A custom servlet that handles headers and other things the container's default servlet doesn't: http://balusc.omnifaces.org/2009/02/fileservlet-supporting-resume-and.html
Then there is the quick and easy one of just using the default servlet and doing some path remapping. For example, in Undertow you configure the Undertow subsystem and add file handlers in standalone.xml that map http://example.com/content/ to /some/path/on/disk/with/files.
So I am leaning towards solution 1, since solution 2 is a straight path remap and I need to change file names on the fly.
I don't want to reinvent the wheel, and both solutions are non-standard, so if I decide to migrate to an application server other than WildFly it will be problematic. Is there a better way? How would you approach this problem?
While your problem is a fairly common one, there isn't necessarily a standards-based solution for every possible design challenge.
I don't think the #2 solution will be sufficient - what if two threads try to manipulate the file at the same time? And if someone got the link to the file, could they share it?
I've implemented something very similar to your #1 solution - the key there is that even if the link to the file gets out, no one can reuse it, because the resource requires authorization. You would just "return" a 401 or 403 for the resource.
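For what it's worth, a much-simplified sketch of that kind of servlet is below (no caching or resume handling like the FileServlet in the linked article, and FileRecord / lookupFileForUser are made-up placeholders for your database layer):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/files/*")
public class AuthorizedFileServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String fileId = req.getPathInfo();   // e.g. "/1234"
        String user = req.getRemoteUser();   // or however you identify the user

        // Placeholder lookup: returns null when the user may not see this file.
        FileRecord record = lookupFileForUser(fileId, user);
        if (record == null) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        Path scrambled = Paths.get(record.pathOnDisk);
        resp.setContentType(getServletContext().getMimeType(record.originalName));
        resp.setContentLengthLong(Files.size(scrambled));
        // Serve the scrambled file under its original name.
        resp.setHeader("Content-Disposition",
                "attachment; filename=\"" + record.originalName + "\"");
        Files.copy(scrambled, resp.getOutputStream());
    }

    // Placeholders standing in for the real database access layer.
    static class FileRecord { String pathOnDisk; String originalName; }

    private FileRecord lookupFileForUser(String fileId, String user) {
        return null; // query the database here
    }
}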
Another possibility depends on how you're hosted. Amazon S3 allows you to generate a signed URL that has a limited time to live. In this way your server isn't sending the file directly. It is either sending a redirect or a URL to the front end to use. Keep the lifetime at like 15 seconds (depending on your needs) and then the URL is no longer valid.
I believe that the other cloud providers have a similar capability too.
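If you go the signed-URL route, the idea looks roughly like the sketch below. This is from memory of the AWS SDK for Java (v1), so treat the exact calls as an assumption; the bucket and key names are placeholders:

import java.net.URL;
import java.util.Date;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class PresignedUrlExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The URL stops working roughly 15 seconds from now.
        Date expiration = new Date(System.currentTimeMillis() + 15_000L);

        // Placeholder bucket and key; after the authorization check, hand this
        // URL (or a redirect to it) back to the user instead of streaming the file.
        URL url = s3.generatePresignedUrl("my-bucket", "scrambled-file-key", expiration);
        System.out.println(url);
    }
}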

How can I reuse a server-side TCP endpoint for multiple consumers?

I'm a beginner trying to understand TCP, and I'm using Rust. If I create a new listener and bind it to an address
let tcplistener = TcpListener::bind("127.0.0.1:55555").unwrap();
I can tcplistener.accept() new connections between 127.0.0.1:55555 and some other endpoint on the client.
In my case, tcplistener lives within an instance of a struct representing a plugin. Each plugin should be controllable from its own browser tab. There is one connection (endpoint-pair) per plugin, with one endpoint always being 127.0.0.1:55555. The plugins run in a single thread with non-blocking listeners and streams. I use websockets, but I'm not sure if this question is specific to websockets.
What I'm doing now is
instantiate plugin A
accept the first connection from a browser tab to plugin A
after that, assign the tcplistener field in plugin A to a newly created listener with an arbitrary, OS-assigned port
This seems to work; if I instantiate a new plugin B afterwards, I can create a listener bound to 127.0.0.1:55555 and accept a connection. If I don't create a new listener with a different address/port, I get the "Address already in use" error.
This is obviously not a good solution since it occupies all the other ports for no reason. Is there a better way?
A comment said:
Why does each plugin have a TcpListener? Why not have one component with the listener, call accept, then hand off the returned TcpStream to each constructed plugin?
That does sound good, but where would that TcpListener be stored, and how does it hand off the streams? Possibilities I see for storing:
The host. I cannot modify the plugin host, I'm just a plugin author.
One dedicated plugin. The problem I see is that plugins can't access any information stored in another plugin, so I wouldn't know how to do that.
A separately running process. I could imagine running a server separately and let the plugins be clients. Users could connect their browser to the server, which somehow does the proxying to the plugin. Sounds reasonable, but the inconvenience here is that plugin users would have to install a server as a separate package. So I'd really like to avoid that. Although I suppose launching the server could be done automatically at plugin instantiation, maybe that's the way to go?
One workaround, if I understand all of your limitations correctly, is to use Option. Option is used for precisely the case of "something or not something".
Here, you could have an Option<TcpListener>. On a freshly-initialized plugin, this would be set to Some(...), and once accepted, would transition to None.
This does have a number of downsides:
There's a time period where there are no listeners.
You have to deal with the possibility of the listener being None.
You can't start a second plugin before the first one accepts something.
Some kind of parent-child relationship is probably better, or even limiting to a singleton plugin, if plausible.

ASP.NET (MVC) Serving images

I am creating an MVC 3 application (although this is just as applicable to other technologies, e.g. ASP.NET Web Forms) and was just wondering whether it is feasible (performance-wise) to serve images from code rather than using the direct virtual path (as usual).
The idea is that I improve the common method of serving files to:
Apply security checks
Standardised method of serving files based on route values
Returning modified images (if requested), e.g. with different dimensions (OK, this would only be used sparingly, so don't relate it to the performance question above)
Perform business logic before allowing access to the resource
I know HOW to do it but I don't know IF I should do it.
What are the performance issues (if any)?
Does something weird happen, e.g. do images only load sequentially? (Maybe that's how HTML does it currently, I'm not sure - exposing my ignorance here.)
Anything else you can think of.
Hope this all makes sense!
Thanks,
Dan.
UPDATE
OK - let's get specific:
What are the performance implications of using this type of method, with a memory stream, for serving all images in MVC 3? Note: the image URL would be GenericFetchImage/image1 (and just for simplicity, all my images are JPEGs).
public FileStreamResult GenericFetchImage(string RouteValueRefToImage)
{
    // Load the image from its file location into a memory stream
    MemoryStream ms = GetImageAndPutIntoMemoryStream(RouteValueRefToImage);

    // Return the stream contents as a file
    return new FileStreamResult(ms, "image/jpeg");
}
I know that this method works, because I am using it to dynamically generate an image based on a session value for a captcha image. It's pretty neat - but I would like to use this method for all image retrieval.
I guess I am wondering, in the above example, whether this is OK to do or whether it requires more processing to perform, and if so, how much? For example, if the number of visitors were to multiply by 1000, would the server then be burdened by the processing needed to deliver the images?
THANKS!
A similar question was asked before (Can an ASP.Net MVC controller return an Image?) and it appears that the performance implications are very small to serving images out of actions vs directly. As the accepted answer noted, the difference appears to be on the order of a millisecond (in that test case, about 13%). You could re-run the test locally and see what the difference is on your hardware.
The best answer to your question of if you should be using it is from this answer to (another) similar question (emphasis mine):
DO worry about the following: you will need to re-implement a caching strategy on the server, since IIS manages that for static files requested directly. You will also need to make sure you manage your client-side caching with the correct headers included in the response. Ultimately, just ask yourself if re-inventing a method of serving static files from a server is something that serves your application's needs.
To address the specific cases you provided with the question:
Apply security checks
You can already do this using the IIS 7 integrated pipeline. Relevant bit from documentation:
Allowing services provided by both native and managed modules to apply to all requests, regardless of handler. For example, managed Forms Authentication can be used for all content, including ASP pages, CGIs, and static files.
Standardised method of serving files based on route values
If I'm reading the documentation correctly, you can insert a module early enough in the pipeline to rewrite incoming URLs to point directly to static resources and let IIS handle the request from there. (For the sake of completeness, there is also this related question regarding mapping routes to images: How do I route images using ASP.Net MVC routing?)
Empowering ASP.NET components to provide functionality that was previously unavailable to them due to their placement in the server pipeline. For example, a managed module providing request rewriting functionality can rewrite the request prior to any server processing, including authentication.
There are also some pretty powerful URL rewrite features that come with IIS more or less out of the box.
Returning modified images (if requested), e.g. with different dimensions (OK, this would only be used sparingly, so don't relate it to the performance question above)
It looks like a module that does this is already available for IIS. Not sure if that would fall under serving images from code or not though, I guess it might.
Perform business logic before allowing access to the resource
If you're performing business logic to generate said resources (like a chart) or as you mentioned a captcha image then yeah, you basically have no choice but to do it this way.

Determining the set of message destinations at runtime in BizTalk application

I'm a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I'm not asking for a complete solution, but for advice and guidelines on which capabilities of BizTalk I should use.
There's a message source; for simplicity, say, a directory where the user adds files to publish them. There are several subscribers, each having a directory to receive published files. The number of subscribers can vary over the lifetime of the program. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a filename pattern or mask which the files it receives must match. Those rules (for example, the patterns) can change over time as well.
I don't know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port and change its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too vague and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try to remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber with its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice option is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or a customer ID in the message. Using these facts, the rules engine can return a bunch of information like the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. As a newbie, these are some advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. On each message going out, use the property schema to set where the message will be sent and what it will be named. This way, you just update the database as things change.
Good Luck!
