Force a parse call on consecutive extends/includes - Symfony

I have disabled template caching via these settings:
when@dev:
    twig:
        auto_reload: true
        cache: false
Here is the rub: I need my node visitors to run each time a template is extended.
I have a rare instance where the same template is used several times, and the code breaks because it was written on the assumption that, with the above configuration, the visitors would run every single time.
This appears not to be the case, which makes sense as a default: why would you traverse a template which has already been traversed and visited?
In my case, I actually need this inefficient behaviour. Is there any way to tweak the Twig service into always calling parse() on extended or included templates that are invoked more than once in a single request?
Again, I have several template includes which all extend from a simple base template, but the node visitors are only invoked on the first inclusion. The code expects to run on every inclusion, regardless of how many times a template is used.
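For reference, the visitors in question are ordinary Twig node visitors along these lines (a minimal sketch of Twig 3's NodeVisitorInterface; the class name and body are illustrative):

use Twig\Environment;
use Twig\Node\Node;
use Twig\NodeVisitor\NodeVisitorInterface;

class MyNodeVisitor implements NodeVisitorInterface
{
    public function enterNode(Node $node, Environment $env): Node
    {
        // Runs only while a template is being parsed/compiled, which is
        // exactly the step that is skipped on repeat loads of a template.
        return $node;
    }

    public function leaveNode(Node $node, Environment $env): ?Node
    {
        return $node;
    }

    public function getPriority(): int
    {
        return 0;
    }
}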
Hopefully I made sense :)

Read RTK-Query state in a selector without having to pass the cache key

Very simple:
Let’s say we use RTK-Query to retrieve the current weather.
For that, I pass the endpoint the arg 'Paris' as the city.
It will serve the current weather for my "game".
Then later, in a Redux selector, I need to compute some derived state based on that current weather.
How do I read that state without having to pass the cache key "Paris"?
Indeed, knowledge of "Paris" was only necessary at the beginning of the app.
It seems that with RTK-Query we're stuck, since you have to pass the argument that was used (the cache key) to the endpoint#select method.
Am I right in saying that RTK-Query does not currently allow that kind of state reading:
"select the current (and single) store entry X, whatever the argument that was needed at loading time was"?
No, there is no built-in way to do that, since it's an edge case.
Usually there are multiple cache entries per endpoint, and there is also no concept of a "latest entry": multiple different components can render at the same time, displaying different entries for the same endpoint, so "latest" would come down to fairly random React rendering order.
The most common solution would be to just save "Paris" somewhere in global state so it is readily available, or to write your selector against the RTKQ store internals by hand (although there might be changes to the state internals in the future).
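For the second option, such a selector could look roughly like this (a sketch only: state.api.queries is internal RTKQ state whose shape may change between versions, and RootState plus the getWeather endpoint name are assumptions from the example):

// Reads whatever fulfilled cache entry exists for the endpoint,
// regardless of the cache key (e.g. 'getWeather("Paris")') it sits under.
const selectAnyCurrentWeather = (state: RootState) => {
  const entry = Object.values(state.api.queries).find(
    (q) => q?.endpointName === 'getWeather' && q.status === 'fulfilled'
  );
  return entry?.data;
};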

How to use Drupal Rules to adapt content access permissions for nodes that are older than 1 week?

I have a special content type named "example". I want to show new nodes of this type to anonymous users of my site.
What I need: one week after a node is created, its content access permissions (the Content Access module is installed) should change so that only users with a particular role are able to see the node.
Should this be triggered on cron? Or, more generally, how do I do something to nodes that are older than 1 week?
Could you provide some instructions on how to do that? I'm new to the Rules module and have no idea where to start.
You should be able to do this with Rules (see this question; not exactly what you want, but close), but I'd go for a tiny custom module implementing hook_cron, where you fetch all nodes with a creation date earlier than (now - 1 week) and modify the permissions for each of them.
It should be more efficient than the Rules approach explained in my first link, where you need to loop over all nodes on each cron run. Rules can also be quite a bit more annoying than writing plain PHP: I prefer learning the Drupal API to spending hours clicking around the Rules interface (Rules is great, but it's hard).
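A rough sketch of such a module (Drupal 7 style; "mymodule" is a placeholder, and the actual Content Access update is left as a comment because it depends on your per-node settings):

/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // Find all "example" nodes created more than a week ago.
  $week_ago = REQUEST_TIME - 7 * 24 * 60 * 60;
  $nids = db_select('node', 'n')
    ->fields('n', array('nid'))
    ->condition('type', 'example')
    ->condition('created', $week_ago, '<')
    ->execute()
    ->fetchCol();
  foreach (node_load_multiple($nids) as $node) {
    // Adjust the Content Access per-node settings for $node here
    // (role-based view access), then rebuild the node's access grants.
    node_access_acquire_grants($node);
  }
}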
Good luck
Yes, you should be able to implement what you're looking for with the Rules module, but I recommend combining it with the Rules Once per Day and Views Rules modules, as explained below.
Step 1: Rules Event
Your question doesn't really specify anything that could/should be used as the Rules Event (for the rule to be triggered). Even though it's basically up to your own imagination (any Rules Event will do), something that will work for sure is the Rules Once per Day module. Here is how it works (as per the comment in issue 2495775 from the module owner):
You specify a trigger hour on the administration settings page for this module.
The Rule trigger will then run when cron tasks are first run after the start of that hour. The actual run time will depend on your cron task timings.
So this is another way to understand/read this:
The "Event" will only be triggered when a cron job is run.
And that event will only be triggered once a day, i.e. the next time cron runs after the trigger hour has passed.
Step 2: Rules Actions (and optional events)
Some details about the Views Rules module (from its project page):
Provides Views directly as Rules actions and loops to seamlessly use view result data.
The previous quote may seem a bit cryptic (it may make you think "so what, how can this help me?"). Therefore, here are some more details about how to move forward using these modules:
Create a view (using Views) that returns one result (row) for each node (of at least 1 week old) that you want processed, with fields (columns) for whatever is needed in the subsequent steps, e.g. the node ID, and possibly other fields as well. You'll need these View fields later on as values to be processed by your rule, to change the content access permissions (using the content_access module) so that only users with a particular role are able to see such nodes (as you described in your question). Important: use the "Rules" Views display type.
Create a custom rule in which you use the Views Rules module to iterate over each of these Views results in a Rules action, using the Rules technique known as a "Rules Loop".
For each iteration step in your Rules loop, perform a Rules Action to "do your thing" (= change the content access permissions). At that point you'll have all the data from each column of your Views result available as so-called Rules Parameters, so it's a piece of cake to adapt the content access permissions for the node you're processing in that loop.
Optionally, you may also want to add whatever extra Rules Condition(s) you need; that too is up to your own imagination.
Easy, no?

How to call a .xqy page from another .xqy page in MarkLogic?

Can I call a .xqy page from another .xqy page in MarkLogic?
There are several ways to execute another .xqy, but the most obvious is probably xdmp:invoke. It calls the .xqy, waits for its results, and returns them on the spot in your code. You can also call a single function using the combination of xdmp:function and xdmp:apply. You could also mess around with xdmp:eval, but that is usually a last resort.
Another strategy could be to use xdmp:http-get, but then the execution runs in a different transaction, so it would always commit. You would also need to know the URL of the other .xqy, which requires some knowledge about whether, and how, URLs are rewritten in the app server (they are not by default).
Running another .xqy without waiting for its results is also possible, with xdmp:spawn. That is particularly useful for dispatching heavy loads, for instance content processing; dispatching batches of 100 to 1000 docs is quite common. Keep an eye on the task queue size, though.
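For illustration, the two most common variants look like this (the module paths are examples and resolve against the app server's modules root):

xquery version "1.0-ml";

(: Run other.xqy in place, passing an external variable; other.xqy
   would contain: declare variable $id external; :)
let $result := xdmp:invoke("/lib/other.xqy", (xs:QName("id"), 123))

(: Fire-and-forget: queue heavy.xqy on the task server and continue. :)
let $_ := xdmp:spawn("/lib/heavy.xqy")
return $result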
HTH!

Classic ASP, application variables, refreshing

I have an application variable which is populated in Application_OnStart (in this case it is an array). Ideally I need to rebuild this array every 3 hours. What is the best way of going about this?
Thanks, R.
Save the time you last refreshed the variable contents.
On every request, check the current time against the saved time. If there's a three-hour difference, lock and refresh the variable.
As long as there are no requests, the variable also needs no refreshing.
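In code, that check could look something like this (a sketch; it assumes Application_OnStart also sets Application("LastRefresh"), and RebuildArray() is a placeholder for your own rebuild logic):

<%
If DateDiff("h", Application("LastRefresh"), Now()) >= 3 Then
    Application.Lock
    ' Re-check after locking, in case another request refreshed it first.
    If DateDiff("h", Application("LastRefresh"), Now()) >= 3 Then
        Application("MyArray") = RebuildArray()
        Application("LastRefresh") = Now()
    End If
    Application.Unlock
End If
%>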
If your application variable must remain "in process" with the rest of the site's code, the way suggested by Tomalak may be your only way of achieving this.
However, if it's possible that the application variable could effectively reside "out of process" of the website's ASP code (although still accessible by it), you may be able to utilise a different (and perhaps slightly better) approach.
Please see "ASP 101: Getting Scripts to Run on a Schedule" for the details.
Tomalak's method is effectively Method 1 in the article, whilst Methods 2 and 3 offer different ways of making something happen on a schedule while avoiding the potentially redundant check on every HTTP request.

Incrementing resource counter in a RESTful way: PUT vs POST

I have a resource that has a counter. For the sake of example, let's call the resource profile, and the counter is the number of views for that profile.
Per the REST wiki, PUT requests should be used for resource creation or modification, and should be idempotent. That combination is fine if I'm updating, say, the profile's name, because I can issue a PUT request which sets the name to something 1000 times and the result does not change.
For these standard PUT requests, I have browsers do something like:
PUT /profiles/123?property=value&property2=value2
For incrementing a counter, one calls the URL like so:
PUT /profiles/123/?counter=views
Each call will result in the counter being incremented. Technically it's an update operation, but it violates idempotency.
I'm looking for guidance/best practice. Are you just doing this as a POST?
I think the right answer is to use PATCH. I didn't see anyone else recommend using it to atomically increment a counter, but I believe RFC 2068 says it all very well:
The PATCH method is similar to PUT except that the entity contains a list of differences between the original version of the resource identified by the Request-URI and the desired content of the resource after the PATCH action has been applied. The list of differences is in a format defined by the media type of the entity (e.g., "application/diff") and MUST include sufficient information to allow the server to recreate the changes necessary to convert the original version of the resource to the desired version.
So, to update profile 123's view count, I would:
PATCH /profiles/123 HTTP/1.1
Host: www.example.com
Content-Type: application/x-counters
views + 1
Where the x-counters media type (which I just made up) consists of multiple lines of field operator scalar tuples. views = 500, views - 1, and views + 3 are all syntactically valid (but may be semantically forbidden).
I can understand some frowning upon making up yet another media type, but I humbly suggest it's more correct than the POST/PUT alternative. Making up a resource for a field, complete with its own URI and especially its own details (which I don't really keep; all I have is an integer), sounds wrong and cumbersome to me. What if I have 23 different counters to maintain?
An alternative might be to add another resource to the system to track the viewings of a profile. You might call it "Viewing".
To see all Viewings of a profile:
GET /profiles/123/viewings
To add a viewing to a profile:
POST /profiles/123/viewings # here, you'd submit the details using a custom media type in the request body (see the sample request after this list).
To update an existing Viewing:
PUT /viewings/815 # submit revised attributes of the Viewing in the request body using the custom media type you created.
To drill down into the details of a viewing:
GET /viewings/815
To delete a Viewing:
DELETE /viewings/815
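For illustration, the POST above might look like this on the wire (the media type name and fields are invented for the example):

POST /profiles/123/viewings HTTP/1.1
Host: www.example.com
Content-Type: application/vnd.example.viewing+json

{ "viewedAt": "2009-07-01T12:00:00Z", "referrer": "/search" }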
Also, because you're asking for best-practice, be sure your RESTful system is hypertext-driven.
For the most part, there's nothing wrong with using query parameters in URIs - just don't give your clients the idea that they can manipulate them.
Instead, create a media type that embodies the concepts the parameters are trying to model. Give this media type a concise, unambiguous, and descriptive name, then document it. The real problem with exposing query parameters in REST is that the practice often leads to out-of-band communication, and therefore to increased coupling between client and server.
Then give your system a uniform interface. For example, adding a new resource is always a POST. Updating a resource is always a PUT. Deleting is DELETE, and getting is GET.
The hardest part about REST is understanding how media types figure into system design (it's also the part that Fielding left out of his dissertation because he ran out of time). If you want a specific example of a hypertext-driven system that uses and documents media types, see the Sun Cloud API.
After evaluating the previous answers, I decided PATCH was inappropriate and that, for my purposes, fiddling around with Content-Type for a trivial task was a violation of the KISS principle. I only needed to increment n by 1, so I just did this:
PUT /profiles/123$views
++
Where ++ is the message body and is interpreted by the controller as an instruction to increment the resource by one.
I chose $ to delimit the field/property of the resource, as it is a legal sub-delimiter and, for my purposes, seemed more intuitive than /, which, in my opinion, has the vibe of traversability.
I think both Yanic's and Rich's approaches are interesting. A PATCH does not need to be safe or idempotent, but it can be made idempotent in order to be more robust against concurrency. Rich's solution is certainly easier to use in a "standard" REST API.
See RFC5789:
PATCH is neither safe nor idempotent as defined by [RFC2616], Section 9.1.
A PATCH request can be issued in such a way as to be idempotent, which also helps prevent bad outcomes from collisions between two PATCH requests on the same resource in a similar time frame. Collisions from multiple PATCH requests may be more dangerous than PUT collisions because some patch formats need to operate from a known base-point or else they will corrupt the resource.
