Read/write data with blocks on Boost cached pages - Drupal

I have a module that supplies a block. The block is set to BLOCK_NO_CACHE, and its content is pulled from a function. It lets a site admin create a 'message' to display on the site, kind of like CNN, where a breaking update is displayed at the top and a user can close it by hitting X. When they close it, the message's UUID is written to their cookie so they don't see that message again.
I am getting reports from Boost users that when someone closes a message, it closes it for everyone. I assume this is because Boost is caching the page and serving a cached page after someone closed the message.
How can I make my module work for people using Boost?
I thought hook_boot might work, but then again I am not sure if there is a better way to address this.

hook_boot will not do it: once the page is in the cache, no PHP is run at all. You need to have that block loaded via AJAX, because the state of that block depends on a cookie.
http://drupal.org/project/ajaxblocks and http://drupal.org/project/ajaxify_regions are two projects that make this easy.
Also, it would be hard to get breaking updates out at all if the page is cached. You will have similar issues with Varnish users as well.
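For illustration, here is a minimal client-side sketch of that approach, assuming a hypothetical /breaking-message callback excluded from Boost's cache and a hypothetical dismissed_messages cookie; this is roughly what those modules automate:

// The cached page ships an empty placeholder; this script fills it in
// after load, so Boost can safely cache the rest of the page.
(function () {
  // Read the dismissed-message UUIDs from the visitor's cookie.
  var match = document.cookie.match(/(?:^|;\s*)dismissed_messages=([^;]*)/);
  var dismissed = match ? decodeURIComponent(match[1]).split(',') : [];

  // Fetch the current message from a path Boost does not cache.
  jQuery.getJSON('/breaking-message', function (msg) {
    // msg is assumed to look like { uuid: "...", html: "..." }
    if (dismissed.indexOf(msg.uuid) === -1) {
      jQuery('#breaking-message').html(msg.html);
    }
  });
})();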

Related

How to avoid repeated API call across an entire NextJS application?

This seems like a really simple thing to do, yet I am having trouble finding the right architecture for it.
Here's the scenario:
We have an API route, api/templates, that should, in theory, be called on every single route/page of the app. It fetches all the different templates, and all the data in the app belongs to one of those templates. The templates are dynamic and can change over time, so they are not an 'importable JSON'.
Every page should get these assets on load, but...
once it's loaded, and you start navigating through pages, the app should NOT re-fetch them on every single page navigation
We will implement a socket notification to alert an already-loaded client when templates change in the database
The problem is that, since this data is needed on every page, SSR still needs to be able to access it on every page: our SEO policy requires server-side rendering to send these pages fully rendered to the client.
So, what we are looking for is:
a somewhat 'conditional' getServerSideProps that fetches the templates on a full reload, but skips the fetch if they are already in the client's memory
we have looked into SWR, which, in theory, would work, but it still makes the API call in the background; that helps on the client side but defeats the objective of not actually making the call, so the backend is still 'burdened' with an unnecessary request
Honestly, this looks like a very 'common' pattern, yet I have completely failed to achieve a proper solution within the NextJS app environment. Maybe it's an "anti-pattern" and we shouldn't be doing this?
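One way to approximate that 'conditional' behaviour (a sketch under stated assumptions, not a drop-in solution: the endpoint URL, TTL, and cache shape are invented) is to let getServerSideProps keep running on every request for SEO, but memoize the templates in the Next.js server process, so the backend API is only hit when the cache is cold or stale; the socket notification mentioned above could clear the cache instead of a TTL:

// lib/templates.js - a module-level cache shared across SSR requests
let cached = null;
let fetchedAt = 0;
const TTL_MS = 60 * 1000; // illustrative; a socket event could clear the cache instead

export async function getTemplates() {
  if (!cached || Date.now() - fetchedAt > TTL_MS) {
    // Only this path actually burdens the backend.
    const res = await fetch('https://api.example.com/api/templates'); // hypothetical URL
    cached = await res.json();
    fetchedAt = Date.now();
  }
  return cached;
}

// pages/any-page.js
// import { getTemplates } from '../lib/templates';
export async function getServerSideProps() {
  // Pages still arrive fully rendered, satisfying the SEO requirement.
  return { props: { templates: await getTemplates() } };
}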

Web App in Flaky Internet connection

I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers, then sends each time back to the server. The server inserts the finish time in the db, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery AJAX calls, so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would just put up a message that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is on- or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) while we are offline, the JS just adds a class of notSent to the finish time's tag. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid until we can communicate with the server.
When the browser finds itself back online, a simple jQuery each loop over the notSent elements re-sends the AJAX requests, and if they all complete, it processes the returned finish-place information and displays it as up to date.
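A minimal sketch of that resend loop (the selectors, data attributes, and endpoint are assumptions for illustration, not the actual markup):

// When connectivity returns, replay every result still marked notSent.
$(window).on('online', function () {
  $('.notSent').each(function () {
    var $row = $(this);
    $.ajax({
      url: '/record-finish.php', // hypothetical endpoint
      method: 'POST',
      dataType: 'json',
      data: { racer: $row.data('racer'), time: $row.data('time') }
    }).done(function (result) {
      // The server recalculates the handicapped finish place.
      $row.removeClass('notSent greyed').find('.place').text(result.place);
    });
  });
});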
It also disables all external links on the page when the browser is offline. This keeps the user from accidentally losing the data-entry page by clicking a link that would give them a 'page not found' error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data-entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers come with the old 'you really shouldn't, and if you think you need to, you should rethink your design' warning.
So, rethinking my design, I started learning about:
HTML5 local storage (I decided I don't need it, since my data is already stored in an input box)
The AppCache manifest, for controlling the cache of the page so that if it is reloaded in the browser while offline, it would get the cached version. After much reading I came to the conclusion that this could work for a static page but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service Workers, which seem to be the future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning Service Workers until there is more support and there are better examples for dynamic-content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a Service Worker when you regain connectivity. If Service Workers are not supported in your browser, it can sync the next time your user opens the browser.
You have a similar example of deferred requests explained in the Service Worker Cookbook.
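A rough sketch of the Background Sync half of that (the tag name and the queue-draining function are placeholders; the queue itself could be the PouchDB database):

// Page script: ask the browser to fire a sync when connectivity returns.
navigator.serviceWorker.ready.then(function (reg) {
  return reg.sync.register('flush-finish-times');
});

// sw.js: 'sync' fires when the browser regains connectivity, even if
// the tab was closed in the meantime.
self.addEventListener('sync', function (event) {
  if (event.tag === 'flush-finish-times') {
    event.waitUntil(drainQueue());
  }
});

function drainQueue() {
  // Placeholder: read the queued finish times from IndexedDB/PouchDB and
  // POST them to the server, resolving once they have all succeeded.
  return Promise.resolve();
}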

asp.net: Is there a way to "disable" the browser's back button after logging out?

Is there a way to "disable" the browser's back button after logging out?
I've read several posts and now I know that I can disable caching.
( e.g. ASP.NET authentication login and logout with browser back button )
This is working, but I want to disable the back button for security reasons only after logging out (= when there's no Session available anymore).
If I disable caching, the user cannot use the browser's back button while logged in.
I'm using custom authentication, not the ASP.NET standard.
Is there a secure (= no JavaScript) way to do this?
As I'm sure you already know, you can't directly disable the "back" button on a browser.
The only methods for preventing a user from going back rely either on the page's cache settings or on the use of JavaScript. Since neither of these works for you, there isn't a solution to this. I've looked at many articles over the years and researched this several times, and all of the suggestions either use client-side script or the cache.
My best suggestion in your case is to use the cache-disable method, look at how your UI responds to the "back" button, and see if there are changes you can make to the design to make it smoother. This may involve checking the session variables, or checking to see if the user is still authenticated, but given your requirements, I believe you're out of luck.
In short, you're going to need to choose the lesser of two evils.
Disabling the page cache ensures that people can't use the "back" button at all, without relying on JavaScript - presumably better security.
Using JavaScript to delete the page history on logout lets you prevent users from going back only after they have logged out, but someone with NoScript turned on, or someone malicious, can disable your control.
You didn't specify exactly who you are trying to protect, and from what, but if I'm guessing right, and you're concerned about the user who leaves their PC after logging out, but without closing the browser window, then is the Javascript really a concern?
Thinking it through, the type of person who would do this isn't thinking about how the info can be used maliciously. Someone who is malicious, presumably, is already "thinking like a bad guy" and knows enough to close the browser window.
Either option could be bypassed via malware that intercepts/alters the HTTP headers, JavaScript, etc., so neither is really 100% effective. The only difference I see is that the JavaScript option can be broken both by altering the HTML as it travels across the wire (using something like Fiddler, or malware) AND by simply having JavaScript disabled, so the page-cache option is marginally better for security purposes.
Using HTTPS instead of plain HTTP offers a lot more protection in combination with the header method, making it much more effective, because it greatly increases the difficulty of manipulating the data across the wire, and it's not defeated simply by disabling JavaScript.
Either way, I think you need to weigh your options and choose one or the other. As sad as it seems, we can only do so much to protect the users from themselves.
Anything running in the browser can be intercepted and/or disabled. Any page sent to the client side can be saved: any link URLs, content, JavaScript, etc.
If you let me load a webpage in a browser on my machine, I can view and save the source of every page I see, and capture every piece of communication to and from the server. If you want HTML to render or JavaScript to run on my machine, I get to see it and can keep it forever if I want.
You could control this by only permitting access to your application through a remote connection to a controlled machine where the application runs, but for a consumer app, this is probably prohibitive.
You could, however, discourage most users with something like this:
function logout() {
  // Open the logged-out page in a new window, then close this one,
  // taking its history with it.
  window.open(loggedOutPageURL);
  self.close();
}
A malicious user will have no problem disabling this JavaScript, or just saving content as he receives it, but this might be the best you can do.
Call Session.Abandon() to dispose of the session on logout, and check Session != null on your data or dashboard pages; then I think there is no need to disable the back button.
Write this script below the logout button:
<script type="text/javascript">
window.history.forward();
</script>

Scraping ASP.NET with Python and urllib2

I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even the ViewState, etc.). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: you enter your username/pass and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there are a few onClick events on said Login button (most of which are just for aesthetics), but the one in question handles validation. It does some rudimentary checks before sending it off to the server side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain as a POST with form data of your username and password, among other things. Then, there is some sort of URL rewrite or redirect that takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% JavaScript.
When scraping a web application, I use either:
1) Wireshark, or
2) a logging proxy server (one that logs headers as well as payload).
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.

How do I force expiration of an ASP.Net session when a user leaves the site?

We have a scenario in which we'd like to detect when the user has left our site and immediately expire their .NET session. We're using Forms Authentication. We're not talking about a session timeout, which we already have. We would like to know when a user has browsed away from our site, either via a link, by typing in an address, or by following a bookmark. If they return to our site, even if right away, they will have to log back in (I understand this is not great usability; this is a security requirement we've been given by our client).
My initial instinct is that this is either not possible, or that any solutions will be extremely unreliable. The only solutions we've come up with are:
Add a JavaScript onBlur event handler that tells the server to log out the session when the user leaves the site.
Once the user has logged in, check the HTTP referrer to ensure that the user has navigated from within the site.
Add AJAX polling back to the server to keep the session refreshed, possibly on a 10-second interval. When the call isn't received on time the session would end.
The onBlur seems like the easiest, but possibly least reliable method - I'm not sure if it would even work. There are also issues with the referrer method, as the user could type in an address within the site and not follow a link. The AJAX method seems like it would work, but it's complicated - I'm not even sure how to handle it on the back-end. I'm thinking there might also be scenarios in which that wouldn't always work.
Any ideas would be appreciated. Thanks.
I have gone for a heartbeat-type scenario like you describe above: either AJAX polling or an IFRAME. When the user closes the browser and a certain timeout elapses (10 seconds?), you can log them out.
Another alternative would be to have the site run entirely on AJAX. There is then only one "URL" a user can visit, and all content is loaded dynamically. Of course you break all sorts of usability stuff this way, but at least you achieve your goal.
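A bare-bones sketch of the client half of that heartbeat (the endpoint and interval are assumptions; the server side would record the last ping per session, and a background job would expire sessions whose ping is stale):

// Ping the server every 10 seconds while the page is open. When the
// pings stop (browser closed, user navigated away), the server ends
// the session once the grace period elapses.
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/session/heartbeat', true); // hypothetical endpoint
  xhr.send();
}, 10000);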
If the user closes their browser or types in a different URL (including selecting a favourite), there is not much for you to detect.
For links on your site, you could create links that forward via your site (e.g. rather than linking to http://example.com/foo, you link to http://mysite.com/forwarder?dest=http://example.com/foo).
Just be careful to only forward to sites you intend to; otherwise you can open up security issues, with the "universal forwarder" being used for phishing, etc.
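The allowlist check itself is simple whatever the stack; here is an Express-flavoured sketch for illustration (in this thread's context it would live in an ASP.NET handler instead, and the forwarder is also the natural place to expire the session before sending the user off-site):

const express = require('express');
const app = express();

// Only ever forward to destinations we explicitly trust.
const ALLOWED_HOSTS = ['example.com', 'www.example.com'];

app.get('/forwarder', function (req, res) {
  var dest;
  try {
    dest = new URL(req.query.dest);
  } catch (e) {
    return res.status(400).send('bad destination');
  }
  if (ALLOWED_HOSTS.indexOf(dest.hostname) === -1) {
    // Refusing unknown hosts prevents the open-redirect/phishing issue.
    return res.status(403).send('destination not allowed');
  }
  // This is where the user's session could be expired, then send them on.
  res.redirect(dest.toString());
});

app.listen(3000);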
You absolutely, positively need to tell the client that this is not possible. They are having a basic misunderstanding of how the Web works. Be diplomatic, obviously... hell, it's probably someone else's job... but it needs to be done.
Your suggestions, or a combination of them, may work in a simple proof of concept... but they will bring you nothing but support nightmares and will not work consistently enough. Worse, you will undoubtedly also create situations where users cannot use the application at all because the security hacks misfire on them.
JavaScript has an onunload event, which is triggered when the browser is told to leave the page. You can see this on StackOverflow when you try to press the back button or click a link while editing an answer.
You may use this event to trigger an auto-logoff for your site.
I am unsure, however, whether this will handle cases wherein the browser is deliberately closed or the browser process is externally terminated (I'm guessing it doesn't fire in the second case).
If all navigation within your site is done through .NET postbacks (no plain HTML links or JavaScript open statements), you can do an automatic logoff and redirect to the login page whenever a page load is not a postback. This does not end the session on exit, but it looks like it does, because it enforces a login when someone manually navigates to your web app. To get this behaviour on all pages, you can use a Master page that does this in its Page_Load:
private void Page_Load(object sender, System.EventArgs e)
{
    // Any request that is not a postback (back button, bookmark,
    // typed-in URL) signs the user out and forces a fresh login.
    if (!IsPostBack)
    {
        System.Web.Security.FormsAuthentication.SignOut();
        System.Web.Security.FormsAuthentication.RedirectToLoginPage();
    }
}
