I am building a Drupal 8 site and am new to the Twig templating engine. For one specific content type I would like to call an external RESTful API and render some of the returned data as fields in the Twig template.
I have an internal id to use for the API call, and I would like to embed the following in the template:
The API call
Setting a number of variables from the call
Rendering the result (with some logic if it does not exist)
Is this something that is easy to do with Twig and Drupal 8?
As a secondary question, is this secure?
The alternative at this stage is to write a small Drupal 8 module, but as there is no user input on the page, just rendering of the returned API call, I thought it would be easier to have it all in one place.
In Drupal 7 it was possible, though poor design, to put arbitrary PHP into a template. In Drupal 8 this was intentionally made hard to do. You should not attempt to execute arbitrary PHP in your Twig files or make remote API calls that late in the processing of a request.
You should call the API and gather the data before you reach Twig. Create a custom module that handles the API interaction and places the response in a field, block, or another structure for rendering in the appropriate context (a custom block often works well for things like this, but exactly which approach makes the most sense depends on your project). Also keep in mind that any page requiring a remote API call is likely to be slow unless that call is very simple and very, very fast. The BigPipe module can help you address those kinds of speed issues, but it involves an additional learning curve.
If you want the browser to handle the API call instead, create a div (or similar markup) as a placeholder for the results, attach JavaScript to that structure, and make the actual API call after most of the page load is complete.
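For illustration, here is a minimal client-side sketch of that approach. It assumes the template renders an empty placeholder div with the id api-result, and the endpoint URL and the title field are stand-ins for your actual API:

```typescript
// Minimal sketch: fetch from the external API after the page has loaded and
// render the result into a placeholder. The endpoint and the `title` field
// are hypothetical; the catch branch covers the "does not exist" case.
document.addEventListener('DOMContentLoaded', async () => {
  const target = document.getElementById('api-result');
  if (!target) return;

  try {
    const response = await fetch('https://api.example.com/items/123');
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const data = await response.json();
    target.textContent = data.title ?? 'No data available';
  } catch {
    // Fallback when the call fails or the record does not exist
    target.textContent = 'External data could not be loaded.';
  }
});
```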
As for security: it is as secure as you make it. Drupal will provide some help to avoid the most common security mistakes, but you can still do things that would make it insecure (like sharing data with an untrusted third party or assuming the response data is always safe).
This seems like a really simple thing to do, yet I am having trouble finding the right architecture to do this.
Here's the scenario:
We have an API route, api/templates, that should in theory be called on every single route/page of the app. It fetches all the different templates, and all the data in the app belongs to one of those templates. These are dynamic and can change over time, so they are not an 'importable JSON'.
Every page should get these assets on load, but...
once the app is loaded and you start navigating through pages, it should NOT re-fetch them on every single page navigation
We will implement a socket notification to alert an already-loaded client when templates change in the database
The problem is that, since this is needed on every page, SSR still needs to be able to access this data on every page, and our SEO policy requires server-side rendering to send these pages fully rendered to the client.
So, what we are looking for is:
to have a somewhat 'conditional' getServerSideProps that fetches the templates on a full reload, but skips the fetch if they are already in the client's memory (see the sketch below)
we have looked into SWR, which in theory would work, but it still makes the API call as an afterthought; that helps on the client side but defeats the objective of not actually making the call, so the backend is still 'burdened' with an unnecessary request
Honestly, this looks like a very 'common' pattern, yet I have completely failed to achieve a proper solution within the Next.js app environment. Maybe it's an "anti-pattern" and we shouldn't be doing this?
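For what it's worth, one way to approximate that 'conditional' getServerSideProps is to detect client-side navigations on the server: Next.js routes them through a /_next/data request, so a full reload and a client navigation can be told apart. A minimal sketch under that assumption (api/templates is the route from the question; the backend host is a placeholder):

```typescript
// pages/example.tsx — a sketch, not a drop-in solution.
import type { GetServerSideProps } from 'next';

export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  // Client-side navigations call getServerSideProps via /_next/data/...,
  // while a full page load (first visit, hard refresh) uses the page URL.
  const isClientNavigation = (req.url ?? '').startsWith('/_next/data');

  if (isClientNavigation) {
    // Skip the fetch; the client already holds the templates in memory.
    return { props: { templates: null } };
  }

  // Full reload: fetch the templates so the page is fully server-rendered.
  const res = await fetch('https://backend.example.com/api/templates'); // placeholder host
  const templates = await res.json();
  return { props: { templates } };
};
```

On the client, a small store (React context, or SWR with the server-rendered value as fallbackData and revalidation disabled) can keep the last non-null templates value and ignore the nulls from client navigations, with the socket notification mentioned above acting as the only refresh trigger.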
I'm using Solr 4.10.2 and Drupal 7.x, and I have the Apache Solr module framework operating and sending the requests to Solr from Drupal. Currently, when we perform a search, Drupal builds the query and sends it to Solr. Solr just executes the query and returns the results without using its internal handlers, which can be configured through SolrConfig.xml.
I would like to know if there is a way to just send the search terms (without building a query) from Drupal and let Solr use the internal handlers declared in SolrConfig.xml to handle the request, build the query, and return the data.
The reason for this is that we have been trying to boost some results when we perform a search (we want exact matches first and fuzzy matches after) by changing the "weight" of some fields.
We know that from the back office we can use the "Bias" function to boost some fields, but this is too limited for what we are trying to achieve.
We also know we can change the query sent from Drupal in code using hook_apachesolr_modify_query(), but we prefer changing as little code as possible and using the SolrConfig.xml handlers we have already configured to return the results the way we want.
OK, we figured out how to do this:
In order to choose the handler Solr uses when a request is sent from Drupal, we have to implement hook_apachesolr_query_alter() and add the following code:
$query->addParam('qt', 'MyHandlerName');
We did some extra coding to allow us to change the handler directly from the back office, so we can switch handlers without touching the code.
I have an ASP.NET MVC and Web API application.
I am a little bit confused about HTTP POST and HTTP PUT.
When should I use which, and what are the pros and cons of each?
I have gone through many blogs, but found no solid explanation of what each is designed for.
Use POST when you need to create a completely new record from scratch.
Use PUT when you need to update an existing record in your database.
Here are the differences between PUT and POST:
`POST is not idempotent` -->
Running the same POST operation again and again will create a new instance every time you call it.
`PUT is idempotent` -->
Calling the same PUT operation again and again will produce the same result.
So POST is not idempotent, while PUT is idempotent.
`There is also PATCH` -->
Use PATCH when you need to update only a few properties of your model; in other words, partial updates.
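To make the distinction concrete, here is a minimal sketch using fetch against a hypothetical api/user endpoint (the URLs and payloads are illustrative, not part of any real API):

```typescript
// Illustrative calls against a hypothetical api/user endpoint.
async function demo(): Promise<void> {
  // POST: create a brand new user. Repeating this call creates a new
  // record each time, which is why POST is not idempotent.
  await fetch('/api/user', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Dan', email: 'dan@example.com' }),
  });

  // PUT: replace user 12345 with this full representation. Repeating the
  // call leaves the resource in the same state, so PUT is idempotent.
  await fetch('/api/user/12345', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Dan', email: 'dan@example.com' }),
  });

  // PATCH: send only the properties that change (a partial update).
  await fetch('/api/user/12345', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: 'dan@new-example.com' }),
  });
}
```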
Put simply (no pun intended):
POST is usually used to CREATE new objects.
PUT is usually used to UPDATE existing objects.
Using the correct HTTP verbs allows you to publish a cleaner API and negates the need for encoding intent within the endpoint (URL). For example, compare:
Using the correct verbs:
GET api/user/12345
POST api/user
PUT api/user/12345
DELETE api/user/12345
Hacking the endpoint:
GET api/user/12345
POST api/user/12345/create
POST api/user/12345/update
POST api/user/12345/delete
I think the only cons of using PUT, etc. are that not all developers are familiar with these verbs, and some third-party software may not support them, or at least may not make them as easy to use as the more familiar GET and POST.
For example, I had a problem a few weeks ago when a proxy was placed in front of an API just before it was due to go live, and the proxy didn't support the HTTP PUT verb (maybe a config issue, but we didn't have access to the proxy to fix it), so we had to tweak the API and change it to POST at the last minute (which also meant changing the clients (mobile apps) that were using it).
I was searching for information about one of my doubts, but I couldn't find any. I'm working on an ASP.NET site and using AJAX to request data; since I'm currently working on my own, I don't know web programming best practices.
I usually get all the information I need from the server, use JavaScript to display and modify it, and use AJAX to send it back to the server. A friend of mine uses PHP for most of the programming; he rarely uses any JavaScript, and he told me it's much faster this way, since it does not consume the client's resources.
The basic question actually is:
According to best practices, is it better for the server to provide just the data the application needs, or is it better to use the server for more than this?
That is going to depend on the expected amount of traffic for the site, the amount of content being generated, and the expectations of the end-user.
In a high-traffic site, it is actually "faster" for the end user if you let JavaScript generate a portion of the content on the client side. Also, you can deliver a better user experience during long load times through client-side scripting than you can if the content is generated completely on the server.
In most cases you would need at least some backend code, e.g. when validating user input or when retrieving information from a real persistent database. And what about somebody who has JavaScript disabled in their user agent, somebody using a screen reader, or search engine crawlers?
IMHO you should (again, in most cases) at least have backend code that is able to do all the work and output a full web page to the client. In addition you can add JavaScript functionality to make the user interface "smoother", for example by validating user data before submitting it to the server (remember to ALWAYS also check on the server side) or by loading partial HTML (AJAX).
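As a minimal sketch of that "enhance, don't replace" idea (the form id, field id, and validation rule here are made up for illustration):

```typescript
// The form still posts normally if JavaScript is unavailable; this script
// only adds an early client-side check for instant feedback.
const form = document.querySelector<HTMLFormElement>('#signup-form');

form?.addEventListener('submit', (event) => {
  const email = form.querySelector<HTMLInputElement>('#email')?.value ?? '';

  if (!email.includes('@')) {
    event.preventDefault(); // stop the submit and tell the user right away
    alert('Please enter a valid email address.');
  }
  // Whatever happens here, the server must repeat the validation, because
  // a client can bypass this script entirely.
});
```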
The point about being faster or using fewer resources when doing it server-side doesn't make much sense. Even if it were true, it wouldn't matter much (and I highly doubt that claim). If you use client-side scripting to load only the parts that are needed, it would actually use fewer resources on both the client side and the server side.
I am creating an MVC 3 application (although this is just as applicable to other technologies, e.g. ASP.NET Web Forms) and was wondering whether it is feasible (performance-wise) to serve images from code rather than via the direct virtual path (as usual).
The idea is to improve on the common method of serving files so that I can:
Apply security checks
Standardised method of serving files based on route values
Returning modified images (if requested), e.g. different dimensions (OK, this would only be used sparingly, so don't relate it to the performance question above).
Perform business logic before allowing access to the resource
I know HOW to do it but I don't know IF I should do it.
What are the performance issues (if any)?
Does something weird happen, e.g. do images load only sequentially (maybe that's how HTML does it currently, I am not sure - exposing my ignorance here)?
Anything else you can think of.
Hope this all makes sense!
Thanks,
Dan.
UPDATE
OK - let's get specific:
What are the performance implications of using this type of method, with a memory stream, for serving all images in MVC 3? Note: the image URL would be GenericFetchImage/image1 (and, just for simplicity, all my images are JPEGs).
public FileStreamResult GenericFetchImage(string RouteValueRefToImage)
{
    // Load the image from its file location into a memory stream.
    // (No need to allocate a stream first; the helper returns one.)
    MemoryStream ms = GetImageAndPutIntoMemoryStream(RouteValueRefToImage);

    // Return the stream contents as a file with the JPEG MIME type.
    return new FileStreamResult(ms, "image/jpeg");
}
I know that this method works, because I am using it to dynamically generate an image based on a session value for a captcha image. It's pretty neat - but I would like to use this method for all image retrieval.
I guess I am wondering whether the above example is OK to do, or whether it requires more processing to perform, and if so, how much? For example, if the number of visitors were to multiply by 1000, would the server then be burdened by the processing needed to deliver the images?
THANKS!
A similar question was asked before (Can an ASP.Net MVC controller return an Image?) and it appears that the performance cost of serving images from actions, versus serving them directly, is very small. As the accepted answer noted, the difference appears to be on the order of a millisecond (in that test case, about 13%). You could re-run the test locally and see what the difference is on your hardware.
The best answer to your question of whether you should be doing it comes from this answer to (another) similar question (emphasis mine):
DO worry about the following: you will need to re-implement a caching strategy on the server, since IIS manages that for static files requested directly. You will also need to make sure you manage your client-side caching with the correct headers included in the response. Ultimately, just ask yourself if re-inventing a method of serving static files from a server is something that serves your application's needs.
To address the specific cases you provided in the question:
Apply security checks
You can already do this using the IIS 7 integrated pipeline. The relevant bit from the documentation:
Allowing services provided by both native and managed modules to apply to all requests, regardless of handler. For example, managed Forms Authentication can be used for all content, including ASP pages, CGIs, and static files.
Standardised method of serving files based on route values
If I'm reading the documentation correctly, you can insert a module early enough in the pipeline to rewrite incoming URLs to point directly at static resources and let IIS handle the request from there. (For the sake of completeness, there is also this related question about mapping routes to images: How do I route images using ASP.Net MVC routing?)
Empowering ASP.NET components to provide functionality that was previously unavailable to them due to their placement in the server pipeline. For example, a managed module providing request rewriting functionality can rewrite the request prior to any server processing, including authentication.
There are also some pretty powerful URL rewrite features that come with IIS more or less out of the box.
Returning modified images (if requested), e.g. different dimensions (OK, this would only be used sparingly, so don't relate it to the performance question above).
It looks like a module that does this is already available for IIS. I'm not sure whether that falls under "serving images from code" or not, though; I guess it might.
Perform business logic before allowing access to the resource
If you're performing business logic to generate the resource (like a chart) or, as you mentioned, a captcha image, then yeah, you basically have no choice but to do it this way.