I've run into a problem and I need a hand.
This is more of a theoretical question than a practical one.
My problem is:
I am making an app in my spare time, which is rather complex - it's supposed to resemble a social network of sorts.
I have 10 reducers in my combineReducers function, and a crapload of actions fire when a user goes to their dashboard or the start/home page - to fetch the user's uploaded images, videos, their profile picture, posts from their friends, posts directed at them, all of their posts together (image, video, audio, and text posts alike), and the groups they're a part of.
However, some of the reducers affect others and simply reset them, even though they were supposed to act independently of one another.
In the images below, you can clearly see that some textual posts were loaded by one of the dispatched actions, only to be completely reset by an action from a different reducer, with a different type constant, while the videos were loading.
In case it matters, some of my separate reducer objects share a loading property, initially set to null or false.
The maintainer on GitHub might have also been stumped, as they recommended that I ask my question here.
[Screenshot: the reducer loads]
[Screenshot: a different reducer randomly resets another one]
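To give an idea of the shape of the code, here's a simplified sketch (the names are placeholders, not my actual reducers). As I understand it, every reducer passed to combineReducers receives every dispatched action, so a default case that returns the initial state instead of the current state - or two type constants that happen to share the same string value - would produce exactly this kind of reset:

```js
// Simplified sketch with placeholder names - not the actual app code.
const initialPosts = { loading: false, items: [] };

function postsReducer(state = initialPosts, action) {
  switch (action.type) {
    case 'FETCH_POSTS_START':
      return { ...state, loading: true };
    case 'FETCH_POSTS_SUCCESS':
      return { ...state, loading: false, items: action.payload };
    default:
      // Every reducer sees every action. Returning initialPosts here
      // (instead of state) would wipe this slice whenever an unrelated
      // action such as FETCH_VIDEOS_SUCCESS is dispatched.
      return state;
  }
}
```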
I have a custom-built page where users can filter products by price, category, brand, ...
The filters are made up of checkboxes, plus a range input for the price.
I'm trying to figure out the best way to track every action/filter in order to find out which brands/categories are the most popular.
Important to know
The menu contains a submenu for the categories. When the user clicks one of these links, the filter page will have that category checked in the filters.
The page does not reload when a filter is applied. I'm using JS to perform the search and show the new results; the page URL is updated with the correct search query parameters.
I think I have 2 options:
Track click events on the checkboxes and send every change with dataLayer.push.
Track the page URL after each filter.
Option 1 is an issue because people might arrive at the page with some parameters already in the URL. Those won't be tracked, because there was no click event. The same issue applies to users who click a category in the submenu, which prefills the filter.
Option 2 is also an issue because the same category might be tracked five times if the user keeps adding or removing other filters: it always tracks all active filters instead of just the one that changed.
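For reference, option 1's push would look roughly like this (the event and field names are placeholders I'd choose, assuming a GTM-style dataLayer):

```js
// Sketch of option 1: push one event per filter change.
// Class names, data attributes, and event fields are placeholders.
document.querySelectorAll('.filter-checkbox').forEach(function (box) {
  box.addEventListener('change', function () {
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({
      event: 'filter_change',
      filterType: box.dataset.filterType, // e.g. "brand" or "category"
      filterValue: box.value,
      checked: box.checked
    });
  });
});
```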
The first step of tracking is applying an analog of Occam's razor: you want to cut out anything that has no chance of answering legitimate business questions.
Your business question here is: which filters are the most helpful for users? Now, it's important to know why the business wants to know that. Cuz remember, the business is not very competent at data analysis, even if it doesn't realize it.
So you need to know exactly how answering that question improves OKRs/KPIs. In this case, a legitimate answer could be: we want to sort the filters by usage frequency and measure whether that eases engagement and thus improves the conversion rate for the part of the journey from the product list to the product detail page.
That's a pretty weak reason, but passable. Especially if there's an issue in that transition currently.
Good. Now, with that context, why would we want to track filters used in pre-populated URLs? Say some overzealous employee makes a mistake and pre-populates some weird, unneeded filter using, say, the date and time a product was added. Now they use that URL in all their ads, so you get a lot of third-party traffic landing on product lists with a date as a filter.
Then, let's say, that employee keeps using that filter in other persistent links, to the effect that the date/time filter becomes uncannily popular. There: your data slowly becomes garbage and stops answering the original question.
There are other issues with tracking pre-set filters, some of which you've outlined, but the real issue is the ability of the data to answer good business questions clearly. Tracking all filters may answer some technical questions, but it's not the aim of behavioral analytics to answer technical questions. Let people use access logs and whatever else for those.
I put my homepage through Google's PageSpeed test and it gave me a score of 69 for Mobile and 95 for Desktop, the one and only issue flagged being render-blocking CSS.
Now, every page on my website fits above the fold, i.e. there is no scrolling involved anywhere. Given this, I personally feel I shouldn't be doing anything special, since the CSS is required to view my page the way I designed it, right from the get-go.
If I load the CSS asynchronously or something, it'll end up showing the content on a plain, disorganised page just before the intended output appears.
Do I ignore Google? It would mean that I'd never score 100/100 - and wouldn't that affect my SEO chances?
TL;DR — No, you don't have to. But in most cases, it helps, indirectly.
Render blocking is in place to prevent FOUC (a flash of unstyled content).
Ideally you should load only the CSS responsible for rendering the above-the-fold part of your page as render-blocking, and load all the rest of your styles using async methods.
However, most sites load all their CSS as render-blocking. Why? Because most websites can't afford a CSS specialist to customize their CSS loading for their specific case. They'll sometimes pay for a theme, but that's it.
Themes are typically not optimized from this point of view, because there is no way to know which elements a user will want in their above-the-fold area.
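As a minimal sketch of the async part (the file path is a placeholder): a stylesheet appended from script does not block rendering, so only the critical, render-blocking CSS holds up the first paint.

```js
// Load non-critical styles without blocking the first render.
// "/css/below-the-fold.css" is a placeholder path.
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/css/below-the-fold.css';
document.head.appendChild(link);
```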
Is this a huge problem?
NO.
First of all, all of this only matters when the user loads the very first page of your website. All the other pages will use the cached stylesheets, already loaded on the first page visited (unless you load different stylesheets for different pages).
And second of all, the general idea that Google lowers your page's SEO score for having render-blocking CSS is, technically, wrong. They do penalize for a lot of other reasons (such as accessibility, readability, and responsiveness issues), but not for having render-blocking CSS.
However, there is an indirect correlation between the two.
Google PageSpeed is a tool that tells you how to improve the loading speed of your page, or how to leave the impression that the page loads faster:
if you fix the problems it identifies, the page will load faster, or at least it will seem to load faster;
if your page is, or feels, faster, there is less chance users will hit the back button while waiting for your page to load.
THIS user behavior is where the SEO penalty comes in. Google registers any such behavior as a general "the user did not find what they were looking for on that website" and lowers the page's SEO score for whatever the user searched for.
Any method of keeping users from hitting the back button in the first 30 seconds after they've left the search results for your website (anything that keeps the bounce rate down) is a good method of fighting SEO penalties.
And... it's true: one of the most efficient methods is to make your page load faster.
Others include:
make the loading process look professional (use correctly sized placeholders for images, so the page doesn't jump around while loading);
keep FOUC as close to zero as possible;
render something rather than nothing;
if possible, give users a general idea of how much of the page has loaded (as a percentage);
make the website load up with a basic outline of what's on longer pages - users will read the outline while trying to figure out whether they're on the right page, and they won't notice the loading time, since you've given them something to do while waiting;
cut the "blah blah" and try to be honest about whatever your page has to offer.
I can't emphasize this enough: it really pays off to be honest. There is a huge difference in results, SEO-wise.
If your page is about A, but you want to show it to users looking for B, do not tell them you've got B, but don't hide it from them either. Just tell them:
"Look, this is not B, it's A, but here are a few reasons why you should consider A instead of B."
Most users will read those reasons - especially if they're well written, address real problems, and don't look like they're just trying to buy time.
A very good idea is to place your strongest argument second or third in the list (second if the first is rather long, third if the first two are not so long).
The reasoning is: if you place it further down, many users won't read past three weak arguments - they'll label the entire list as unconvincing and go back.
Also, if you place the arguments in order of importance, the user will notice, and as soon as they hit two unconvincing arguments they'll assume it only gets worse further down the list and, again, hit the back button.
But if the second or third argument is stronger than the ones before it, they'll read through the entire list hoping to find another strong one.
Now, if your arguments are compelling, the user will go for A instead of B => Win.
If not, they will still go for B, but at least they'll do it later (after they've read your reasons), and the penalty will be much smaller, if there is one at all (the longer a user spends on your page, the smaller the penalty, should they press back) => No loss.
If you can keep the user occupied for more than 30 seconds, you're typically in the clear, SEO-wise. And that's the really important SEO issue at hand - not render blocking per se.
In the end, it is totally possible to create a page with a very low score on Page Speed while having a very high SEO score. It's unlikely, but totally possible.
I recently created an application where a lot of data is loaded into objects when the application starts up, and other data as it is required. For example, if the user requests the catalogue page, it loads all the top-level category data into objects of type Category. These then stay in memory for other users (who therefore won't have to load the data into objects again) and can be altered by an admin if they happen to log in during the same application instance. I know this is not the most efficient solution, as pointed out below, but it works, and page loads, at the moment, are not too long. It is very quick when most of the required data is already loaded into objects. It is also tailored to the business's needs, unlike generic techniques such as LINQ to SQL.
The problem I am facing is when a page is requested that needs to display lots of data about different types of object. For example, when a catalogue page displaying information on a purchasable product is requested, it loads all the products and categories (as the products reference the category objects, not just the category names).
I would like to display a loading symbol with a message while all this data is being loaded into objects, so the user knows it's not just stuck in a loop or anything. Is there any way to do this? I am open to using JS/jQuery if I need to.
Thanks in advance.
Regards,
Richard
PS: I am working on ways to make it more efficient, such as using hash tables or hash maps. However, this is taking time, as there are so many different types of item (News, Events, Catalogue Item - Range, Collection, Design, RangeCollection, CollectionDesign, RangeCollectionDesign and RangeDesign - Users, PageViews, and the list goes on).
Please correct me if I'm wrong, but I do believe that JavaScript is required in order to display a "loading" image... Using server-side scripting alone would typically require an entire page load after all the content loads, unless you want to start messing with iframes.
This is a job for AJAX. A common solution to your problem is to have a small page that displays a loading icon, plus some JavaScript that makes additional HTTP requests to the server to download the rest of the page. jQuery has an $.ajax method that is designed to simplify this process.
I would suggest looking at the $.ajax method in the jQuery documentation. Unfortunately, it can be a rather delicate process to get all the scripting code right, and it takes a while to learn it all.
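A minimal sketch of the pattern (the URL and element IDs are placeholders, not from your app):

```js
// Show a spinner while the heavy content is fetched, then swap it in.
$('#spinner').show();

$.ajax({
  url: '/catalogue/products', // placeholder endpoint
  type: 'GET',
  dataType: 'html'
})
  .done(function (html) {
    $('#catalogue').html(html); // replace placeholder content
  })
  .fail(function () {
    $('#catalogue').text('Sorry, the catalogue failed to load.');
  })
  .always(function () {
    $('#spinner').hide();
  });
```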
My web app records users via webcam and microphone. I want to use HTML/JS for the controls and content, so I created two separate Flex modules:
* A "Webcam Setup" module that lets you choose your camera and mic input devices
* A "record" module that lets the user record and submit the recording
When I embed either of these on the page, since they access the user's Camera/Mic object, Flash shows the Privacy dialog that says "[mysite] is requesting access to your camera and microphone. If you click Allow, you may be recorded."
The problem is, if I answer Yes in the Setup module and later add the Record module to the page using JavaScript, it shows the Privacy dialog again.
Is there a way to avoid the second privacy popup?
I would have thought that saying "Yes" for [mysite] would store that permission for at least the session, but apparently not.
What I've tried
I tried combining them into one SWF, adding it to the page once, and moving the DOM element with jQuery's append() function when needed. When I move it, however, the SWF reloads and asks me again.
Imagine if [mysite] were, say, blogger.com or livejournal.com (or, if it were still around, geocities.com). Would you want a "yes" response on that site to be good for every page under that domain?
Remember: just because you promise (cross your heart and hope to die) not to abuse the security hole you're requesting doesn't mean they can allow you to have that security hole.
Eventually, I found a usable workaround, similar to what I originally tried (above).
I combined the setup and record modules into one SWF. I first show the Setup screen; when the user hits the Continue button on my page, JavaScript calls a function in the SWF to swap to the Record screen.
I then move the <div> containing the Flash object to another location on the page using absolute positioning, and resize the object.
Previously, I was trying to use jQuery's append() function to move the div within the DOM, and that was causing the SWF to reload. Just changing its position and size does work.
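In code, the working approach looks roughly like this (element and function names are placeholders; showRecordScreen is a function I expose from the SWF via ExternalInterface):

```js
// Swap the combined SWF to its Record screen, then move/resize its
// container in place. Placeholder names throughout.
var swf = document.getElementById('recorderSwf');
swf.showRecordScreen(); // callback registered in the SWF with ExternalInterface

// Reposition with CSS only. Re-inserting the node into the DOM
// (e.g. with append()) reloads the SWF and re-triggers the dialog.
$('#recorderContainer').css({
  position: 'absolute',
  top: 200,
  left: 40,
  width: 480,
  height: 360
});
```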
You could build the "record" component to simply send and receive signals via an API you've created for your "setup" component (which has already been authorized - meaning one authorization covers both SWFs), using the LocalConnection class:
http://livedocs.adobe.com/flex/3/langref/flash/net/LocalConnection.html
This seems far closer to best practice than the other implementations mentioned, which smell a bit hacky and would probably confuse anyone who inherits the codebase in the future.
I have a grid with several thousand rows that can be filtered and sorted. On each row you can click a details button, which brings you to a new page with detailed information about that row. Because this is a button, you can't middle-click or right-click and open it in a new tab. In addition, when clicking back you lose your filters and search results.
To solve this problem, I considered the following: switch the buttons to links, and use GET instead of POST requests when filtering and searching. That way, you could open new pages with a right-click or middle-click, and if you did follow a link normally, back would work properly.
This change was not made, however. Instead, we were asked to add a 'next result / previous result' pair of buttons on the details page to allow navigation. While not an elegant solution, it would at least work.
I proposed adding querystring parameters to the details page that would regenerate the search query based on the filters and allow you to fetch the next and previous results in code.
A team member took issue with this solution: he believes it is a waste of server resources to re-query the database. Instead, he proposed adding a session variable holding the list of results, which you could then use to navigate.
I took issue with that because you can't have multiple tabs open without breaking navigation, and new results aren't appended to the list in real time. Also, if you're worried about optimization, session state would be the last thing to use, since it eats memory and prevents server replication... unless you store the results back in the database.
What's the best solution?
Session doesn't sound like a winner; it won't scale with lots of users.
Hitting the database repeatedly does seem unnecessary, but it depends on the cost: how many users are there, how often do they refresh/filter, and what does that query cost?
If you do use querystrings, you could cache the pages by parameter.
What about some AJAX code on that button to retrieve the details - leave the underlying grid in place and display the details in a div/panel or a new window/tab?
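A rough sketch of that idea (the URL and selectors are placeholders):

```js
// Fetch row details into a panel without leaving the grid,
// so filters and sort order stay intact.
$('.details-button').on('click', function () {
  var id = $(this).closest('tr').data('id'); // placeholder row id attribute

  $('#detailsPanel').text('Loading...').show();

  $.get('/products/details', { id: id }) // placeholder endpoint
    .done(function (html) {
      $('#detailsPanel').html(html);
    })
    .fail(function () {
      $('#detailsPanel').text('Could not load details.');
    });
});
```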