I created a Chrome extension to override the helpText bubbles on Salesforce Console pages. The helpText bubbles display plain text, with no way to turn URLs into links.
The extension takes the helpText bubble (which, in the Salesforce console window, lives inside an iframe) and makes the URLs clickable. It also adds word wrap and marks the links in blue.
The solution works fine when the page loads with the initial iframe (or iframes) on it, meaning when you open the Salesforce console for the first time (https://eu3.salesforce.com/console).
When a new tab is created in the Salesforce console, my injected script doesn't run.
Can you please help me understand how to inject the script into each and every new tab the Salesforce console creates?
The extension is as follows:
manifest.json:
{
    "browser_action": {
        "default_icon": "icons/icon16.png"
    },
    "content_scripts": [ {
        "all_frames": true,
        "js": [ "js/jquery/jquery.js", "src/inject/inject.js" ],
        "matches": [ "https://*.salesforce.com/*", "http://*.salesforce.com/*" ]
    } ],
    "default_locale": "en",
    "description": "This extension Fix SalesForce help bubbles",
    "icons": {
        "128": "icons/icon128.png",
        "16": "icons/icon16.png",
        "48": "icons/icon48.png"
    },
    "manifest_version": 2,
    "name": "--Fix SalesForce bubble text--",
    "permissions": [ "https://*.salesforce.com/*", "http://*.salesforce.com/*" ],
    "update_url": "https://clients2.google.com/service/update2/crx",
    "version": "5"
}
And this is the inject.js:
chrome.extension.sendMessage({}, function(response) {
    var readyStateCheckInterval = setInterval(function() {
        if (document.readyState === "complete") {
            clearInterval(readyStateCheckInterval);
            var frame = jQuery('#servicedesk iframe.x-border-panel');
            frame = frame.contents();

            function linkify(inputText) {
                var replacedText, replacePattern1, replacePattern2, replacePattern3;
                var originalText = inputText;

                // URLs starting with http://, https://, file:// or ftp://
                replacePattern1 = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/gim;
                replacedText = inputText.replace(replacePattern1, '<a href="$1" target="_blank">$1</a>');

                // URLs starting with "www." (without // before it, or it'd re-link the ones done above).
                replacePattern2 = /(^|[^\/f])(www\.[\S]+(\b|$))/gim;
                replacedText = replacedText.replace(replacePattern2, '$1<a href="http://$2" target="_blank">$2</a>');

                // Change email addresses to mailto: links.
                replacePattern3 = /(([a-zA-Z0-9\-\_\.])+@[a-zA-Z\_]+?(\.[a-zA-Z]{2,6})+)/gim;
                replacedText = replacedText.replace(replacePattern3, '<a href="mailto:$1">$1</a>');

                // If there are hrefs in the original text, split the text up
                // and only work on the parts that don't have URLs yet.
                var count = originalText.match(/<a href/g) || [];
                if (count.length > 0) {
                    var combinedReplacedText;
                    // Keep the delimiter when splitting
                    var splitInput = originalText.split(/(<\/a>)/g);
                    for (var i = 0; i < splitInput.length; i++) {
                        if (splitInput[i].match(/<a href/g) == null) {
                            splitInput[i] = splitInput[i]
                                .replace(replacePattern1, '<a href="$1" target="_blank">$1</a>')
                                .replace(replacePattern2, '$1<a href="http://$2" target="_blank">$2</a>')
                                .replace(replacePattern3, '<a href="mailto:$1">$1</a>');
                        }
                    }
                    combinedReplacedText = splitInput.join('');
                    return combinedReplacedText;
                } else {
                    return replacedText;
                }
            }

            var helpOrbReady = setInterval(function() {
                var helpOrb = frame.find('.helpOrb');
                if (helpOrb.length) {
                    clearInterval(helpOrbReady);
                } else {
                    return;
                }
                helpOrb.on('mouseout', function(event) {
                    event.stopPropagation();
                    event.preventDefault();
                    setTimeout(function() {
                        var helpText = frame.find('.helpText');
                        helpText.css('display', 'block');
                        helpText.css('opacity', '1');
                        helpText.css('word-wrap', 'break-word');
                        var text = helpText.html();
                        text = text.substr(text.indexOf('http'));
                        text = text.substr(0, text.indexOf(' '));
                        var newHtml = helpText.html();
                        helpText.html(linkify(newHtml));
                    }, 500);
                });
            }, 1000);
        }
    }, 1000);
});
It is possible (I have not tested it, but it sounds plausible from a few questions I've seen here) that Chrome does not automatically inject manifest-specified code into newly-created <iframe> elements.
In that case, you will have to use a background script to re-inject your script:
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.reinject) {
        chrome.tabs.executeScript(
            sender.tab.id,
            { file: "js/jquery/jquery.js", allFrames: true },
            function() {
                chrome.tabs.executeScript(
                    sender.tab.id,
                    { file: "src/inject/inject.js", allFrames: true }
                );
            }
        );
    }
});
Content script:
// Before everything: include guard, ensure injected only once
if(injected) return;
var injected = true;
function onNewIframe(){
chrome.runtime.sendMessage({reinject: true});
}
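For this to run at all, the background script also has to be declared in the manifest; a minimal sketch (the background.js file name is an assumption, not something from the original extension):
"background": {
    "scripts": ["background.js"]
}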
Now, I have many questions about your code, which are not directly related to your question.
Why the pointless sendMessage wrapper? No one is even listening, so your code basically returns with an error set.
Why all the intervals? Use events instead of polling.
If you are waiting on document to become ready, jQuery offers $(document).ready(...)
If you're waiting on DOM modifications, learn to use DOM mutation observers. This would, by the way, be the preferred way to call onNewIframe().
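A rough sketch of that last suggestion: a mutation observer in the top frame that calls onNewIframe() whenever a new iframe shows up. The selector and the assumption that new console tabs arrive as added iframe nodes under document.body are mine, not taken from the actual console markup:
// Content script, top frame: watch for new iframes being added to the page
// and ask the background page to (re)inject into them.
var observer = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
        Array.prototype.forEach.call(mutation.addedNodes, function(node) {
            // The added node may itself be an iframe, or contain one.
            if (node.nodeName === 'IFRAME' ||
                (node.querySelector && node.querySelector('iframe'))) {
                onNewIframe();
            }
        });
    });
});
observer.observe(document.body, { childList: true, subtree: true });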
I am moving from Leaflet to Mapbox GL and have some data issues. My web API is proven, but I cannot integrate the two smoothly.
The approach I gave up on, based on their examples and my own research, looks like this:
map = new mapboxgl.Map({
    container: 'mapdiv',
    style: 'mapbox://styles/mapbox/streets-v10',
    center: start,
    zoom: $scope.zoom,
    transformRequest: (url, resourceType) => {
        if (resourceType === 'Source' && url.startsWith(CONFIG.API_URL)) {
            return {
                headers: {
                    'Authorization': 'Bearer ' + localStorageService.get("authorizationData"),
                    'Access-Control-Allow-Origin': CONFIG.APP_URL,
                    'Access-Control-Allow-Credentials': 'true'
                }
            }
        }
    }
});
This should be passing my OAuth2 token (or at least I think it should) and handling the cross-origin (CORS) part.
Accompanying the above with:
map.addSource(layerName, { type: 'geojson', url: getLayerURL($scope.remLayers[i]) });
map.getSource(layerName).setData(getLayerURL($scope.remLayers[i]));
Having also tried to no avail:
map.addSource(layerName, { "type": 'geojson', "data": { "type": "FeatureCollection", "features": [] }});
map.getSource(layerName).setData(getLayerURL($scope.remLayers[i]));
Although there are no errors, Fiddler does not show any requests being made to my layer web API. All the other requests show up, but Mapbox does not appear to be raising these.
The URL looks like:
http://localhost:49198/api/layer/?bbox=36.686654090881355,34.72821077223763,36.74072742462159,34.73664000652042&dtype=l&id=cf0e1df7-9510-4d03-9319-d4a1a7d6646d&sessionId=9a7d7daf-76fc-4dd8-af4f-b55d341e60e4
Because this was not working, I attempted a more manual approach using my existing $http calls, which partially works.
map = new mapboxgl.Map({
    container: 'mapdiv',
    style: 'mapbox://styles/mapbox/streets-v10',
    center: start,
    zoom: $scope.zoom,
    transformRequest: (url, resourceType) => {
        if (resourceType === 'Source' && url.startsWith(CONFIG.API_URL)) {
            return {
                headers: {
                    'Authorization': 'Bearer ' + localStorageService.get("authorizationData")
                }
            }
        }
    }
});
map.addSource(layerName, {
    "type": 'geojson',
    "data": { "type": "FeatureCollection", "features": [] }
});
The tricky part is knowing when to run the data retrieval call. The only place I could find was the map's data event, which now looks like this:
map.on('data', function (e) {
    if (e.dataType === 'source' && e.isSourceLoaded === false && e.tile === undefined) {
        // See if the datasource is known
        for (var i = 0; i < $scope.remLayers.length; i++) {
            if (e.sourceId === $scope.remLayers[i].name) {
                askForData(i);
            }
        }
    }
});

function askForData(i) {
    var data = getBBoxString(map);
    var mapZoomLevel = map.getZoom();
    if (checkZoom(mapZoomLevel, $scope.remLayers[i].minZoom, $scope.remLayers[i].maxZoom)) {
        mapWebSvr.getData({
                bbox: data, dtype: 0, id: $scope.remLayers[i].id, buffer: $scope.remLayers[i].isBuffer, sessionId
            },
            function (data, indexValue, indexType) {
                showNewData(data, indexValue, indexType);
            },
            function () {
                // Not done yet.
            },
            i,
            0
        );
    }
}

function showNewData(ajxresponse, index, indexType) {
    map.getSource($scope.remLayers[index].name).setData(ajxresponse);
    map.getSource($scope.remLayers[index].name).isSourceLoaded = true;
}
This is all working, with one exception: it keeps firing time and time again. Some of these calls return a lot of data for a web call, so it's not a workable solution at the moment.
It's like it's never satisfied with the data, even though it's showing it on the map!
There is a property on the data event, isSourceLoaded, but it never gets set to true.
I have searched for an example and have tried setting isSourceLoaded in a number of places (as in the code above), but to no avail.
Does anyone have a method that accomplishes this basic data retrieval successfully, or can anyone point out the error(s) in my code? Or even point me to a working example...
I have spent too long on this now and could do with some help.
After a bit of a runaround, I have a solution.
A Mapbox email pointed to populating the data in the load event - which I am now doing.
This was not, however, the whole solution I was looking for, as the data needs refreshing when the map moves, zooms, etc. - further lookups are required.
After a bit more examination, a solution was found.
Using the code below on the render event will request the information whenever the bounding box changes.
var renderStaticBounds = getBoundsString(map.getBounds());

map.on('render', function (e) {
    if (renderStaticBounds != getBoundsString(map.getBounds())) {
        renderStaticBounds = getBoundsString(map.getBounds());
        for (var i = 0; i < $scope.remLayers.length; i++) {
            askForData(i);
        }
    }
});

function getBoundsString(mapBounds) {
    var left = mapBounds._sw.lng;
    var bottom = mapBounds._sw.lat;
    var right = mapBounds._ne.lng;
    var top = mapBounds._ne.lat;
    return left + ',' + bottom + ',' + right + ',' + top;
}
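If firing on every render still feels heavy, a lighter variant (just a sketch, not tested against this app, and reusing the same getBoundsString/askForData helpers) is to listen to the map's moveend event, which fires once per completed pan or zoom rather than on every rendered frame:
var lastBounds = getBoundsString(map.getBounds());

// moveend fires once when a pan/zoom gesture ends, not on every rendered frame
map.on('moveend', function () {
    var newBounds = getBoundsString(map.getBounds());
    if (newBounds !== lastBounds) {
        lastBounds = newBounds;
        for (var i = 0; i < $scope.remLayers.length; i++) {
            askForData(i);
        }
    }
});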
This hopefully will save someone some development time.
I am trying to scrape all links of a particular kind (box-score links) from this website, http://www.basketball-reference.com/teams/GSW/2016_games.html, and then visit them one by one, scraping some information from each visited link. To start with, I want to scrape all the links, visit them one by one, and get the title of each page. The problem is that it always prints the same title and the same current URL (the initial URL), even though it clearly has to be a new one. It seems to me that there is a problem with the 'this' keyword...
(Don't mind the limit on links; I took the code from a CasperJS sample on GitHub and left it in so the console doesn't get overloaded.)
This is my code:
var casper = require("casper").create({
    verbose: true
});

// The base links array
var links = [ "http://www.basketball-reference.com/teams/GSW/2016_games.html" ];

// If we don't set a limit, it could go on forever
var upTo = ~~casper.cli.get(0) || 10;

var currentLink = 0;

// Get the links, and add them to the links array
function addLinks(link) {
    this.then(function() {
        var found = this.evaluate(searchLinks);
        this.echo(found.length + " links found on " + link);
        links = links.concat(found);
    });
}

// Fetch all <a> elements from the page and return
// the ones which contains a href starting with 'http://'
function searchLinks() {
    var links = document.querySelectorAll('#teams_games td:nth-child(5) a');
    return Array.prototype.map.call(links, function(e) {
        return e.getAttribute('href');
    });
}

// Just opens the page and prints the title
function start(link) {
    this.start(link, function() {
        this.wait(5000, function() {
            this.echo('Page title: ' + this.getTitle());
            this.echo('Current url: ' + this.getCurrentUrl());
        });
    });
}

// As long as it has a next link, and is under the maximum limit, will keep running
function check() {
    if (links[currentLink] && currentLink < upTo) {
        this.echo('--- Link ' + currentLink + ' ---');
        start.call(this, links[currentLink]);
        addLinks.call(this, links[currentLink]);
        currentLink++;
        this.run(check);
    } else {
        this.echo("All done.");
        this.exit();
    }
}

casper.start().then(function() {
    this.echo("Starting");
});

casper.run(check);
Considering an array of URLs, you can iterate over them, visiting each in succession with something like the following:
casper.each(urls, function(self, url) {
    self.thenOpen(url, function(){
        this.echo('Opening: ' + url);
        // Do Whatever
    });
});
Obviously this will not find links on a page, but it is a nice way to go over a known set of URLs.
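To tie this back to the original page, one possible combination (a sketch, untested; it reuses the question's searchLinks() selector and assumes the scraped hrefs are relative to the site root) is to collect the links first and only then walk them with casper.each:
var casper = require("casper").create();

// Same selector as in the question: the box-score links in column 5
function searchLinks() {
    var links = document.querySelectorAll('#teams_games td:nth-child(5) a');
    return Array.prototype.map.call(links, function (e) {
        return e.getAttribute('href');
    });
}

casper.start("http://www.basketball-reference.com/teams/GSW/2016_games.html");

casper.then(function () {
    var found = this.evaluate(searchLinks);
    this.echo(found.length + " links found");
    // The hrefs are relative, so resolve them against the site root
    var urls = found.map(function (href) {
        return "http://www.basketball-reference.com" + href;
    });
    this.each(urls, function (self, url) {
        self.thenOpen(url, function () {
            this.echo('Page title: ' + this.getTitle());
            this.echo('Current url: ' + this.getCurrentUrl());
        });
    });
});

casper.run();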
This is a bit puzzling to me. I set data in the router (which I'm intentionally using very simply at this stage of my project), as follows:
Router.route('/groups/:_id', function() {
    this.render('groupPage', {
        data: function() {
            return Groups.findOne({_id: this.params._id});
        }
    }, { sort: {time: -1} });
});
The data, as you would expect, is now available in the template helpers, but if I have a look at 'this' in the rendered function, it's null:
Template.groupPage.rendered = function() {
console.log(this);
};
I'd love to understand why (presuming it's an expected result), or whether it's something I'm doing / not doing that causes this.
In my experience, this isn't uncommon. Below is how I handle it in my routes.
From what I understand, the template gets rendered client-side while the client is still subscribing, so null really is all the data that is available at that point.
Once the client receives data from the subscription (server), it is added to the collection, which causes the template to re-render.
Below is the pattern I use for routes. Notice the if (!this.ready()) return; which handles the no-data situation.
Router.route('landing', {
    path: '/b/:b/:brandId/:template',
    onAfterAction: function() {
        if (this.title) document.title = this.title;
    },
    data: function() {
        if (!this.ready()) return;
        var brand = Brands.findOne(this.params.brandId);
        if (!brand) return false;
        this.title = brand.title;
        return brand;
    },
    waitOn: function() {
        return [
            Meteor.subscribe('landingPageByBrandId', this.params.brandId),
            Meteor.subscribe('myProfile'), // For verification
        ];
    },
});
Issue
I was experiencing this myself today. I believe there is a race condition between the Template.rendered callback and the Iron Router data function. I have since raised this as an IronRouter issue on GitHub to deal with the core problem.
In the meantime, workarounds:
Option 1: Wrap your code in a window.setTimeout()
Template.groupPage.rendered = function() {
    var data_context = this.data;
    window.setTimeout(function() {
        console.log(data_context);
    }, 100);
};
Option 2: Wrap your code in a this.autorun()
Template.groupPage.rendered = function() {
    var data_context = this.data;
    this.autorun(function() {
        console.log(data_context);
    });
};
Note: in this option, the function will run every time that the template's data context changes! The autorun will be destroyed along with the template though, unlike Tracker.autorun calls.
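A small usage sketch of option 2 (hypothetical; it assumes the data context is the group document and may be undefined while the subscription is still loading). Template.currentData() is the reactive way to read the data context inside the autorun:
Template.groupPage.rendered = function() {
    this.autorun(function() {
        // Read the data context reactively; re-runs when it changes
        var group = Template.currentData();
        if (!group) return; // still waiting on the subscription
        console.log('Group data is ready:', group);
    });
};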
I would like to switch to an iframe using pure PhantomJS code.
Here is my first attempt:
var page = new WebPage();
var url = 'http://www.theurltofectch';

page.open(url, function (status) {
    if ('success' !== status) {
        console.log("Error");
    } else {
        page.switchToFrame("thenameoftheiframe");
        console.log(page.content);
        phantom.exit();
    }
});
It only produces the source code of the main page. Any ideas?
Note that the iframe's domain is different from the main page's domain.
Please give this a try. I believe it may be an async issue, meaning the iframe is not present yet when you try to access it. I received the snippet below from another post.
var page = require('webpage').create(),
    testindex = 0,
    loadInProgress = false;

page.onConsoleMessage = function(msg) {
    console.log(msg);
};

page.onLoadStarted = function() {
    loadInProgress = true;
    console.log("load started");
};

page.onLoadFinished = function() {
    loadInProgress = false;
    console.log("load finished");
};

/*
page.onNavigationRequested = function(url, type, willNavigate, main) {
    console.log('Trying to navigate to: ' + url);
    console.log('Caused by: ' + type);
    console.log('Will actually navigate: ' + willNavigate);
    console.log('Sent from the page\'s main frame: ' + main);
};
*/

/*
The steps array represents a finite set of steps in order to perform the unit test
*/
var steps = [
    function() {
        // Load login page
        page.open("https://www.yourpage.com");
    },
    function() {
        // Access your iframe here
        page.evaluate(function() {
        });
    },
    function() {
        // Any other step you want
        page.evaluate(function() {
        });
    },
    function() {
        // Output content of page to stdout after form has been submitted
        page.evaluate(function() {
            //console.log(document.querySelectorAll('html')[0].outerHTML);
        });
        // Render a test image to see if login passed
        page.render('test.png');
    }
];

var interval = setInterval(function() {
    if (!loadInProgress && typeof steps[testindex] === "function") {
        console.log("step " + (testindex + 1));
        steps[testindex]();
        testindex++;
    }
    if (typeof steps[testindex] !== "function") {
        console.log("test complete!");
        phantom.exit();
    }
}, 50);
Replace
console.log(page.content);
with
console.log(page.frameContent);
This should return the contents of the frame PhantomJS switched to.
If the iframe is from another domain you may need to add the --web-security=no option like this:
phantomjs --web-security=no myscript.js
As additional information, what xMythicx said could be true. Some iframes are rendered via JavaScript after the page finishes loading. If the iframe contents are empty, then you will need to wait for all resources to finish loading before you start grabbing stuff from the page. But this is another issue; if you need an answer on it, I suggest you ask a new question about it, and I will answer there.
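As a rough illustration of waiting for such an iframe before switching to it (a sketch only, reusing the placeholder URL and frame name from the question; the polling interval and timeout are arbitrary):
// Sketch: poll until the iframe exists, then switch to it and dump its content.
var page = require('webpage').create();

page.open('http://www.theurltofectch', function (status) {
    if (status !== 'success') {
        console.log("Error");
        phantom.exit(1);
        return;
    }
    var attempts = 0;
    var poll = setInterval(function () {
        var hasFrame = page.evaluate(function () {
            return !!document.querySelector('iframe[name="thenameoftheiframe"]');
        });
        if (hasFrame) {
            clearInterval(poll);
            page.switchToFrame("thenameoftheiframe");
            console.log(page.frameContent); // content of the frame, not the main page
            phantom.exit();
        } else if (++attempts > 50) { // give up after ~5 seconds
            clearInterval(poll);
            console.log("iframe never appeared");
            phantom.exit(1);
        }
    }, 100);
});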
I had the same problem with iframes, and
phantomjs --web-security=no
helped in my case :]
I've been poring over this for hours and have yet to make much headway, so I was hoping one of the wonderful denizens of SO could help me out. Here's the problem...
I'm implementing a tree via the jstree plugin for jQuery. I'm pulling the data with which I populate the tree programmatically from our web app, via JSON dumped into an asp:HiddenField, basically like this:
JavaScriptSerializer serializer = new JavaScriptSerializer();
string json = serializer.Serialize(Items);
json = json.ToLower();
data.Value = json;
Then the tree pulls the JSON from the hidden field to build itself. This works perfectly fine up until I try to persist which nodes are selected/opened. To simplify my problem, I've hardcoded some JSON data into the tree and attempted to use the cookie plugin to persist the tree state. This does not work, for whatever reason. I've seen other issues where people needed to load the plugins in a specific order, etc.; this did not solve my issue. I tried the same setup with html_data and it works perfectly. With persistence working there, I converted the cookie plugin to persist the data in a different asp:HiddenField (we can't use cookies for this type of thing in our application).
Essentially the cookie operations are identical; it just saves the array of nodes as the value of a hidden field. This works with html_data, but still not with the JSON data, and I have yet to put my finger on where it's failing.
This is the jQuery.cookie.js replacement:
jQuery.persist = function(name, value) {
    if (typeof value != 'undefined') { // name and value given, set persist
        if (value === null) {
            value = '';
        }
        jQuery('#' + name).attr('value', value);
    } else { // only name given, get value
        var persistValue = null;
        persistValue = jQuery('#' + name).attr('value');
        return persistValue;
    }
};
The jstree.cookie.js code is identical save for a few variable name changes.
And this is my tree:
$(function() {
    $("#demo1").jstree({
        "json_data": {
            "data": [
                {
                    "data": "A node",
                    "children": [ "Child 1", "Child 2" ]
                },
                {
                    "attr": { "id": "li.node.id" },
                    "data": {
                        "title": "li.node.id",
                        "attr": { "href": "#" }
                    },
                    "children": ["Child 1", "Child 2"]
                }
            ]
        },
        "persistence": {
            "save_opened": "<%= open.ClientID %>",
            "save_selected": "<%= select.ClientID %>",
            "auto_save": true
        },
        "plugins": ["themes", "ui", "persistence", "json_data"]
    });
});
The data is definitely being stored appropriately in the hidden fields; the problem occurs on a postback: it does not reopen the nodes. Any help would be greatly appreciated.
After looking through this some more, it appears to me that the issue is that the tree has not yet been built from the json_data when the persistence operations are attempted. Is there any way to postpone those actions until after the tree is fully loaded?
If anyone is still attempting to perform this type of operation on jsTree version 3.0+, there is an easier way to accomplish the same functionality, without editing any of jsTree's core JavaScript and without relying on the "state" plugin (version 1.0 - "persistence"):
var jsTreeControl = $("#jsTreeControl");
// Can be an "asp:HiddenField"
var stateJSONControl = $("#stateJSONControl");
var url = "exampleURL";

jsTreeControl.jstree({
    'core': {
        "data": function (node, cb) {
            var thisVar = this;
            // On the initial load, if the "state" already exists in the hidden value,
            // then simply use that rather than make an AJAX call
            if (stateJSONControl.val() !== "" && node.id === "#") {
                cb.call(thisVar, { d: JSON.parse(stateJSONControl.val()) });
            }
            else {
                $.ajax({
                    type: "POST",
                    url: url,
                    async: true,
                    success: function (json) {
                        cb.call(thisVar, json);
                    },
                    contentType: "application/json; charset=utf-8",
                    dataType: "json"
                }).responseText;
            }
        }
    }
});

// If the user changes the jsTree, save the full JSON of the jsTree into the hidden value;
// this will then be restored on postback by the "data" function in the jsTree declaration
jsTreeControl.on("changed.jstree", function (e, data) {
    if (typeof (data.node) != 'undefined') {
        stateJSONControl.val(JSON.stringify(jsTreeControl.jstree(true).get_json()));
    }
});
This code will create a jsTree and save its "state" into a hidden value; upon postback, when the jsTree is recreated, it will use its old "state" restored from the HiddenField rather than make a new AJAX call and lose the expansions/selections the user has made.
Got it working properly with JSON data. I had to edit the "reopen" and "reselect" functions inside jstree itself.
Here's the new functioning reopen function for anyone who needs it.
reopen: function(is_callback) {
    var _this = this,
        done = true,
        current = [],
        remaining = [];
    if (!is_callback) { this.data.core.reopen = false; this.data.core.refreshing = true; }
    if (this.data.core.to_open.length) {
        $.each(this.data.core.to_open, function(i, val) {
            val = val.replace(/^#/, "");
            if (val == "#") { return true; }
            if ($(("li[id=" + val + "]")).length && $(("li[id=" + val + "]")).is(".jstree-closed")) { current.push($(("li[id=" + val + "]"))); }
            else { remaining.push(val); }
        });
        if (current.length) {
            this.data.core.to_open = remaining;
            $.each(current, function(i, val) {
                _this.open_node(val, function() { _this.reopen(true); }, true);
            });
            done = false;
        }
    }
    if (done) {
        // TODO: find a more elegant approach to synchronizing returning requests
        if (this.data.core.reopen) { clearTimeout(this.data.core.reopen); }
        this.data.core.reopen = setTimeout(function() { _this.__callback({}, _this); }, 50);
        this.data.core.refreshing = false;
    }
},
The problem was that it was trying to find the element by a custom attribute. It was just pushing these strings into the array to search when it was expecting node objects. Using this line
if ($(("li[id=" + val + "]")).length && $(("li[id=" + val + "]")).is(".jstree-closed")) { current.push($(("li[id=" + val + "]"))); }
instead of
if ($(val).length && $(val).is(".jstree-closed")) { current.push(val); }
was all it took. Using a similar process I was able to persist the selected nodes this way as well.
Hope this is of help to someone.