Hi, I am using jQuery's load() to grab the href from a link, and then I want to load a div from the fetched page into a div on my own page, so I have tried this:
// let's load the contents
$('#navigationLinks a:not(:first-child)').click(function(event){
    $('#wrapper').animate({
        'left': '0px'
    });
    var href = $('#navigationLinks a').attr('href');
    $('#content').load(href + ' #resultDiv');
    event.preventDefault();
});
This is the HTML:
<div id="navigationLinks">
Dashboard Home
Industry Overview
Regions
Industries
Security Pipeline
Audit Events & Issues
Account Filter
Contractual vs. Delivered Services
</div>
I tried removing the space before the # in ' #resultDiv', but that didn't help. Any help would be greatly appreciated.
You should try this:
$(document).ready(function(){
    $('#navigationLinks a:not(:first-child)').click(function(e){
        // use the link that was actually clicked, not the first one in the block
        var href = $(this).attr('href');
        $('#content').load(href + ' #resultDiv');
        e.preventDefault();
    });
});
The problem was that var href = $('#navigationLinks a').attr('href'); will always get the href of the first link in the block, not the link that was actually clicked.
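Putting the fix together with the animation from the question, the whole handler would look something like this (the load() callback is just a hypothetical spot for any post-load work):

$(document).ready(function(){
    $('#navigationLinks a:not(:first-child)').click(function(e){
        e.preventDefault();
        $('#wrapper').animate({ 'left': '0px' });
        // 'this' is the anchor that was actually clicked
        $('#content').load($(this).attr('href') + ' #resultDiv', function(){
            // hypothetical: anything that should run once #resultDiv has arrived
        });
    });
});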
(premise: this is a pretty noob question)
Hi guys,
I am trying to code a good AJAX navigation/pagination for WordPress.
Currently I am trying to append the new page's articles instead of replacing the old ones.
In archive.php I've replaced <?php the_posts_navigation(); ?> with
<nav class="navigation posts-navigation" role="navigation">
    <div class="nav-links">
        <div class="nav-previous">
            <?php next_posts_link(); ?>
        </div>
    </div>
</nav>
since I want to display only the "Next page" link (which I will style in a button later).
And in my js file I have
// delegated handler: .live() was removed in jQuery 1.9, this is the equivalent
$(document).on('click', '.nav-previous a', function(e){
    e.preventDefault();
    $('.navigation').remove(); // remove the navigation link at the bottom of the first articles
    var link = $(this).attr('href');
    $.get(link, function(data) {
        // keep only the article container from the fetched page
        var articles = $(data).find('.archive-container');
        $('#primary').append(articles);
    });
});
It is not clear to me how to implement history handling in a context like this. I'd like to give users the possibility to scroll back to previous results after clicking the back button, and to keep the results when someone clicks on an article and then goes back to the results.
Now, if I use something like window.history.pushState('', '', link); at the end of the click function above, it changes the URL when fetching the next articles, and this is correct. But if I click on an article (e.g. one on /page/2) to read it and then click the back button, it shows only the results of the page containing that article (/page/2 in my example). I'd like to show all the articles the user saw before leaving the archive page instead.
At the moment I'm also working with window.localStorage to reach the goal, but I'd like to understand whether it is feasible with the History API alone, and how to do it.
// build the URL of the next page: bump /page/N/ if present, else append /page/2/
console.log(window.location.href);
let loc = window.location.href.slice(0, -1); // drop the trailing slash
let ppos = loc.indexOf("page/");
if (ppos >= 0) {
    let page = parseInt(loc.slice(loc.lastIndexOf('/') + 1)) + 1;
    loc = loc.slice(0, ppos) + 'page/' + page + '/';
} else {
    loc += '/page/2/';
}
console.log(loc);
window.history.pushState('', '', loc);
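For the back-button behaviour, one option is to record how many pages have been appended in the state object given to pushState, and replay the appends when that history entry is revisited. A rough sketch under the question's markup ('#primary', '.archive-container'); the pagesLoaded counter and loadPage() helper are hypothetical names:

let pagesLoaded = 1; // hypothetical: how many pages are currently shown

// hypothetical helper: fetch /page/n/ and append its articles
function loadPage(n) {
    $.get('/page/' + n + '/', function(data) {
        $('#primary').append($(data).find('.archive-container'));
    });
}

// when appending the next page, store the count in the new history entry:
// window.history.pushState({ pages: ++pagesLoaded }, '', loc);

// history.state survives leaving the page and coming back to the entry,
// so the appends can be replayed both on popstate and on a fresh load
function restoreFromState(state) {
    if (state && state.pages) {
        for (let n = 2; n <= state.pages; n++) loadPage(n);
    }
}
window.addEventListener('popstate', function(e) { restoreFromState(e.state); });
restoreFromState(window.history.state);

Whether the replay refetches the pages or reads them back from localStorage is then an implementation detail; the history entry only needs to carry enough information to rebuild the view.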
I have been asked by an academic colleague to extract info from a website, where I need to link the content of a table on a webpage (not too hard) with the contents of a text file which is only reachable (as far as I can tell) by clicking on a javascript link, e.g.
<a id="tk1" href="javascript:__doPostBack('tk1$ContentPlaceHolder1$grid$tk$OpenFileButton','')">
The data is conveniently inside a table with id='tk1', which is nice... but how do I follow the link which pulls up the text file?
Ideally I'd like to do this in R... I can grab the relevant table in text format by saying
u <- "..."  # the URL of interest
library(XML)
tables = readHTMLTable(u)
interestingTable <- tables[grep('tk1', names(tables))]
And this will give me the text in the table, but how do I grab the HTML for that particular table? And how do I "click" on the button and get the text file behind it?
I note that there is a form with massive hidden values; the site appears to be ASP.NET driven and uses impenetrable URLs.
Many thanks!
This is somewhat tricky, and not fully integrated with R, but some system() fiddling will get you started.
Download and install PhantomJS: http://code.google.com/p/phantomjs/
Check the short script at http://menne-biomed.de/uni/JavaButton.html, which emulates your case. When you click the javascript anchor, it redirects to http://cran.at.r-project.org/ via doPostBack(inaccesibleJavascriptVar).
Save the following script locally as javabutton.js
var page = new WebPage();
page.open('http://www.menne-biomed.de/uni/JavaButton.html', function (status) {
    if (status !== 'success') {
        console.log('Unable to access network');
    } else {
        var ua = page.evaluate(function () {
            // pull the expression between the parentheses of the javascript: href
            var t = document.getElementById('tk1').href;
            var re = /\((.*)\)/; // a regex literal; new RegExp('\((.*)\)') never escaped the parens
            return eval(re.exec(t)[1]);
        });
        console.log(ua); // outputs http://cran.at.r-project.org/
    }
    phantom.exit();
});
With phantomjs on your PATH, call
phantomjs javabutton.js
The link will be displayed on the console. Use any method you like to get it into RCurl.
Not elegant, but maybe someone wraps phantomjs into R one day. In case the link to JavaButton.html is ever lost, here it is as code.
<!DOCTYPE html>
<html>
<head>
<script>
inaccesibleJavascriptVar = 'http://' + 'cran.at.r-project.org/';
function doPostBack(myref)
{
    window.location.href = myref;
    return false;
}
</script>
</head>
<body>
<a id="tk1" href="javascript:doPostBack(inaccesibleJavascriptVar)">Click here</a>
</body>
</html>
Have a look at the RCurl package:
http://www.omegahat.org/RCurl/
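Once phantomjs has printed the real URL, pulling the file into R is short. A minimal sketch, assuming the script above is saved as javabutton.js and prints only the link (object names are illustrative):

library(RCurl)

# run the phantomjs script and capture the URL it writes to the console
fileUrl <- system("phantomjs javabutton.js", intern = TRUE)

# fetch the text file behind the javascript link
txt <- getURL(fileUrl)
cat(txt)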
Thanks for reading. I have some code on my WordPress site: the first snippet adds an overlay over an image with a color, the article title, and a link to go to the project. The second adds AJAX pagination using jQuery.
The thing is that my projects with images and the jQuery overlay work perfectly, but when someone clicks on the previous-projects link that calls the AJAX pagination, the jQuery overlay stops working.
I have been trying different options, but maybe I'm not on the right track. Does anyone have a clue?
Thanks in advance.
The code:
// PORTFOLIO HOVER EFFECT
jQuery('ul.portfolio-thumbs li').hover(function(){
    jQuery(".overlay", this).stop().animate({top:'0px'},{queue:false,duration:300});
}, function() {
    jQuery(".overlay", this).stop().animate({top:'190px'},{queue:false,duration:300});
});

// POSTS NAVIGATION
jQuery('#posts-navigation a').live('click', function(e){
    e.preventDefault();
    var link = jQuery(this).attr('href');
    jQuery('#ajax-container').fadeOut(500).load(link + ' #ajax-inner', function(){
        jQuery('#ajax-container').fadeIn(500);
    });
});
I found the solution the same day, and #BrockAdams helped me with my doubts. I'm putting the code here because it may be helpful for someone.
jQuery('ul.portfolio-thumbs li').live('hover', function(event){
    if (event.type == 'mouseenter') {
        jQuery(".overlay", this).stop().animate({top:'0px'},{queue:false,duration:300});
    } else {
        jQuery(".overlay", this).stop().animate({top:'190px'},{queue:false,duration:300});
    }
});

jQuery('#posts-navigation a').live('click', function(e){
    e.preventDefault();
    var link = jQuery(this).attr('href');
    jQuery('#ajax-container').fadeOut(500).load(link + ' #ajax-inner', function(){
        jQuery('#ajax-container').fadeIn(500);
    });
});
You can post answers to your own question.
And you needed to use live() on the hover, because the pagination presumably loads in new portfolio-thumbs li elements.
Without live(), these new li elements would have no events attached to them (unless you re-called jQuery('ul.portfolio-thumbs li').hover after every pagination event).
live() is easier, and avoids the pitfall of attaching multiple copies of the same event listener to an element.
And, yes, you can use both live() calls (or more) on the same page without problems.
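For reference, live() was deprecated in jQuery 1.7 and removed in 1.9; the same delegation pattern is now written with .on(). A sketch of the equivalent handlers, assuming the same markup as above:

jQuery(document).on('mouseenter mouseleave', 'ul.portfolio-thumbs li', function(event){
    // slide the overlay in on enter, out on leave
    var top = (event.type == 'mouseenter') ? '0px' : '190px';
    jQuery(".overlay", this).stop().animate({top: top}, {queue:false, duration:300});
});

jQuery(document).on('click', '#posts-navigation a', function(e){
    e.preventDefault();
    jQuery('#ajax-container').fadeOut(500).load(jQuery(this).attr('href') + ' #ajax-inner', function(){
        jQuery('#ajax-container').fadeIn(500);
    });
});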
Google's documentation says that an event can be tracked in the following way:
<a onclick="_gaq.push(['_trackEvent', 'category', 'action', 'opt_label', opt_value]);">click me</a>
or older version:
<a onclick="pageTracker._trackEvent('category', 'action', 'opt_label', opt_value);">click me</a>
I was looking with Firebug at the requests that are made when I click on a link, and I see an aborted request there:
http://www.google-analytics.com/__utm.gif?utmwv=4.7.2&utmn=907737223&....
This happens because the browser unloads all JavaScript when the user navigates to a new page. How is event tracking performed in this case?
Edit:
Since one picture can be worth a thousand words...
When I click a link, Firebug shows me this sequence of requests (the first four are shown; after them follow the requests that fill the page content):
The problem is that there isn't enough time for the script to finish running before the user is taken to the next page. What you can do is create a wrapper function for your GA code; in the onclick, call the wrapper, and after the GA code is triggered, set a timeout and update location.href with the link's URL. Example:
<a href="http://www.example.com/" onclick="wrapper_function(this, 'category', 'action', 'opt_label', 1); return false;">click me</a>
<script type='text/javascript'>
function wrapper_function(that, category, action, opt_label, opt_value) {
    // send the event first...
    _gaq.push(['_trackEvent', category, action, opt_label, opt_value]);
    // ...then give the tracking request time to fire before navigating
    window.setTimeout(function() {
        window.location.href = that.href;
    }, 1000);
}
</script>
The code will vary a bit based on your link, but hopefully you get the idea: it waits a little before taking the user to the target URL, giving the script some time to execute.
Update:
This answer was posted several years ago and quite a lot has happened since then, yet I continue to get feedback (and upvotes) occasionally, so I thought I'd update it with new info. This approach is still doable, but if you are using Universal Analytics there is a hitCallback function available. hitCallback is also available in the traditional _gaq (ga.js), though it isn't officially documented there.
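With Universal Analytics, the hitCallback field makes the arbitrary timeout unnecessary: you navigate as soon as the hit has been sent. A minimal sketch, assuming analytics.js is loaded and the link calls the function from its onclick (names and strings are placeholders):

function trackOutbound(link, category, action) {
    ga('send', 'event', category, action, {
        'hitCallback': function() {
            document.location = link.href; // navigate only after the hit is sent
        }
    });
    return false; // cancel the immediate navigation
}

In production you would also want a short setTimeout fallback, since hitCallback never fires if the Analytics script is blocked.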
This problem is answered in Google's documentation:
use
<script type="text/javascript">
function recordOutboundLink(link, category, action) {
    try {
        var myTracker = _gat._getTrackerByName();
        _gaq.push(['myTracker._trackEvent', category, action]);
        setTimeout('document.location = "' + link.href + '"', 100);
    } catch(err) {}
}
</script>
or
<script type="text/javascript">
function recordOutboundLink(link, category, action) {
    try {
        var pageTracker = _gat._getTracker("UA-XXXXX-X");
        pageTracker._trackEvent(category, action);
        setTimeout('document.location = "' + link.href + '"', 100);
    } catch(err) {}
}
</script>
This is more or less the same as the answer from Crayon Violet, but it has a nicer method name and is the official solution recommended by Google.
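For completeness, the function is meant to be wired to the link like this (the href and the category/action strings are placeholders); the return false stops the browser from navigating before the setTimeout fires:

<a href="http://www.example.com/" onclick="recordOutboundLink(this, 'Outbound Links', 'example.com'); return false;">example.com</a>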
As above, this is due to the page being unloaded before the async call returns. If you want to implement a small delay to let _gaq send the hit, I would suggest the following.
First add a link with an extra data attribute (the href here is a placeholder):
<a href="http://www.example.com/" data-track-exit>My Link</a>
Then add this to your JavaScript:
$("a[data-track-exit]").on('click', function(e) {
e.preventDefault();
var thatEl = $(this);
thatEl.unbind(e.type, arguments.callee);
_gaq.push( [ "_trackEvent", action, e.type, 'label', 1 ] );
setTimeout(function() {
thatEl.trigger(event);
}, 200);
});
I don't really condone this behavior (e.g. if you are going to another page on your site, try to capture the data on that page instead), but it is a decent stop-gap. This can be extrapolated beyond click events to form submits and anything else that causes a page unload. Hope this helps!
I had the same issue. Try this one; it works for me. It looks like GA doesn't like numbers as a label value, so convert it to a string.
trackEvent: function(category, action, opt_label, opt_value){
    // GA expects the label to be a string and the value to be a number
    if (typeof opt_label === 'undefined') opt_label = '';
    if (typeof opt_value === 'undefined') opt_value = 1;
    _gaq.push([
        '_trackEvent',
        String(category),
        String(action),
        String(opt_label),
        opt_value
    ]);
}
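Usage would then look something like this (the tracker object name and the argument values are illustrative):

// the numeric label 42 is coerced to the string "42" before being pushed
tracker.trackEvent('video', 'play', 42, 5);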
I have written some code using jQuery to use AJAX to get data from another WebForm, and it works fine. I'm copying the code to another project, but it won't work properly.

When a class member is clicked, it gives me the ProductID that I have concatenated onto the input ID, but it never alerts the data from the $.get. The test page (/Products/Ajax/Default.aspx) that I have set up simply returns the text "TESTING...". I installed Web Development Helper in IE, and it shows that the request is getting to the test page and that the status is 200 with my correct return text. However, my calling page refreshes before it ever shows me the data that I'm asking for.

Below are the code snippets from my page. Please let me know if there are other code blocks that you need to see. Thank you!
<script type="text/javascript">
$(document).ready(function() {
    $(".addtocart_a").click(function() {
        var sProdIDFileID = $(this).attr("id");
        var aProdIDFileID = sProdIDFileID.split("_");
        var sProdID = aProdIDFileID[5];
        // *** This alert shows fine -- ProdID: 7
        alert("ProdID: " + sProdID);
        $.get("/Products/Ajax/Default.aspx", { test: "yes" }, function(data) {
            // *** This alert never gets displayed
            alert("Data Loaded: " + data);
        }, "text");
    });
});
</script>
<input src="/images/add_to_cart.png" name="ctl00$ctl00$ContentPlaceHolder1$ContentPlaceHolder1$aAddToCart_7" type="image" id="ctl00_ctl00_ContentPlaceHolder1_ContentPlaceHolder1_aAddToCart_7" class="addtocart_a" />
The easiest way is to stop the click's default action: the image input submits its form, which reloads the page before your $.get callback can run. Return false from the handler:
$(".addtocart_a").click(function(e){
// REST OF FUNCTION
return false;
});
Good luck! If you need anything else let me know.
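Equivalently, you can call e.preventDefault() at the top of the existing handler; a sketch using the question's own code:

$(".addtocart_a").click(function(e) {
    e.preventDefault(); // keep the image input from submitting its form
    var sProdID = $(this).attr("id").split("_")[5];
    alert("ProdID: " + sProdID);
    $.get("/Products/Ajax/Default.aspx", { test: "yes" }, function(data) {
        alert("Data Loaded: " + data); // fires now that the page no longer reloads
    }, "text");
});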