Specifically, how does it manage to serve different versions of the same site, with no access to the server or anything, just a script in the head?
All client-side testing platforms work the same way: they apply changes by executing JS on top of the existing HTML of the page.
Basically, these platforms provide a WYSIWYG editor that lets you make changes on any site. These changes range from simple ones like color/text/layout tweaks to more complex ones where you modify the HTML content of an element altogether.
Every change made via the visual editor generates corresponding JS code that is executed on the fly when someone is bucketed into one of the variants.
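For illustration, a hypothetical change generated by the visual editor might look like the sketch below; the selector, text and color are made up, not taken from any real platform.

```js
// Hypothetical variant code generated by the visual editor.
// The selector and values are illustrative only.
(function () {
  var cta = document.querySelector('#hero .cta-button');
  if (cta) {
    cta.textContent = 'Start your free trial';   // text change
    cta.style.backgroundColor = '#e74c3c';       // color change
  }
})();
```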
To summarize, the flow will be:
Inside the platform
Place the platform's JS snippet on the site (it should be inside the head tag to avoid any flickering).
Create the test and the variants in the platform using the visual editor or by writing your own code inside the code editor.
Run the test.
On the website
The user visits the site and the respective platform's JS snippet executes.
The snippet connects to the nearest CDN and fetches the test configuration along with the platform's library (see the sketch after this list).
The library executes immediately and applies the changes to the respective elements by running the JS generated during variant creation.
The library sends a tracking hit with the user and variant info to the platform's reporting.
You get the stats in real time and can see which variant performed best.
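A rough sketch of what the snippet/library does at runtime is below. The CDN URL, config shape and hit payload are assumptions for illustration, not any specific vendor's API.

```js
// Minimal sketch of the runtime flow: fetch config, apply the variant, send a hit.
// Endpoint and payload fields are assumptions.
async function runExperiments() {
  // 1. Fetch the test configuration from the CDN.
  const config = await fetch('https://cdn.example-testing.com/config/SITE_ID.json')
    .then(function (res) { return res.json(); });

  // 2. Bucket the user into a variant (kept sticky via localStorage).
  let variant = localStorage.getItem('exp_variant');
  if (!variant) {
    variant = Math.random() < 0.5 ? 'control' : 'variation_1';
    localStorage.setItem('exp_variant', variant);
  }

  // 3. Apply the JS generated for that variant.
  const change = config.variants[variant];
  if (change && change.js) {
    new Function(change.js)(); // run the stored variant code
  }

  // 4. Send a tracking hit with the variant info to the reporting backend.
  navigator.sendBeacon('https://cdn.example-testing.com/track', JSON.stringify({
    experiment: config.id,
    variant: variant,
    url: location.href
  }));
}

runExperiments();
```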
I recently installed the project described here: https://developer.salesforce.com/blogs/2020/11/how-to-use-apex-natively-with-svelte-vue-and-preact-within-lwc.html to test a theory about using Preact in a Lightning Web Component. I observed that in the Preact component, any click anywhere in the component fires the onclick function for the first element rendered in the component (with an onclick property); additional clicks, or clicks directly on other elements (with or without onclick properties), only fire the function for that first element. This behavior matches a separate project I've been working on that includes Preact. Does anyone know what would cause this and/or have suggestions on ways to address it?
I'm assuming this is related to the LWC wrapper and how it redirects browser events to be processed, but I'm out of my depth in terms of fully debugging that path.
I ran this by the author of the linked blog post and we confirmed this doesn't work in an actual org, though it works fine in a local dev sandbox. The likely culprit is Locker Service, but neither the author nor I was willing to try to verify that, and there wouldn't be a whole lot to be done about it even if it were confirmed.
Short answer: Preact doesn't currently work in the LWC framework.
I was just about to set up a 2nd GA property that I would implement in my Staging environment. I figured I'd do the same with GTM and just export/import containers from Stage to Production whenever necessary. I also figured I'd populate the Tracking-ID dynamically based on hostname. No big deal.
But then I stumbled across Environments for GTM. The first bit I read said that using this feature would solve the problem of moving code across environments. To me this implied that the snippet code would remain the same in all environments and that there would be no need to change (dynamically, via build script, manually or otherwise) any values at all... that GTM was smart enough to deploy the right container(s) to the right place(s) at the right time(s). That sounds great, I'll do it.
Now that I'm getting into that process I'm learning (if I'm understanding correctly) that each environment does in fact have to have a separate snippet. So now I'm back to where I started, with having to dynamically add values to the snippets based on domain name (which determines stage or test). Without that, every time the file containing the snippet is pushed between environments, it will contain the wrong values. I guess using Environments still removes the export/import process for containers (which, don't get me wrong, is nice), but having to change those values is a pain.
Is this the long and short of it - do I have this right? Is there any way around having to change code in the web page (or template) by doing it somehow through GTM instead? I'm guessing not, since the snippet is the base of GTM's functionality, but I figured I'd ask.
Further complicating things is that I was planning to use a WordPress plugin, Google Tag Manager for WordPress, to add the GTM code. In this case, all I can even change is the Tracking-ID, which actually stays the same... it's the other values that change that I have no control over with the plugin. Is anyone aware of a way to inject new values into the snippet that the plugin writes to the page?
The snippet for an environment has the same GTM ID, but has a token for the environment name attached to the gtm.js URL. If you use any kind of build system it should be possible to set or change the token according to the server you deploy to. Personally I am not convinced that environments are really useful.
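To make that concrete: as far as I can tell, the only per-environment difference is the query string appended to the gtm.js URL, so a build step can swap those values per deployment target. A sketch, with placeholder auth/preview values and hostnames:

```js
// Illustrative only: the gtm_auth / gtm_preview values and hostnames are placeholders.
// A build script (or a small runtime check like this) picks the environment token.
var env = location.hostname === 'staging.example.com'
  ? { auth: 'STAGING_AUTH_TOKEN', preview: 'env-3' }   // placeholder values
  : { auth: 'LIVE_AUTH_TOKEN',    preview: 'env-2' };  // placeholder values

var src = 'https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXX' +
          '&gtm_auth=' + env.auth +
          '&gtm_preview=' + env.preview +
          '&gtm_cookies_win=x';
// The rest of the standard container snippet stays identical across environments.
```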
If all you need is different values for tracking IDs, you can implement a lookup table variable that takes the hostname variable as input and returns the respective tracking ID for live or staging. Then use that variable instead of hardcoding the tracking ID into your tag.
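The lookup table itself is configured in the GTM UI, but the logic it implements is equivalent to something like this (hostnames and tracking IDs below are placeholders):

```js
// Equivalent of a GTM lookup table variable: hostname in, tracking ID out.
// Hostnames and IDs are placeholders.
function trackingIdFor(hostname) {
  var table = {
    'www.example.com':     'UA-1111111-1', // live
    'staging.example.com': 'UA-1111111-2'  // staging
  };
  return table[hostname] || 'UA-1111111-1'; // default value
}
```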
I am trying to inject og: tags in my React app. I came across https://github.com/nfl/react-helmet and it dynamically injects the tags into my index.html head just like I wanted. The problem is that it injects the tags at the end of the head, so they were not recognised by the Facebook debugger. It works when the Open Graph tags appear right at the beginning of the head, before the script tags. With react-helmet, however, they end up at the very end. How do I best fix this? I am trying to get article previews on social media and it is failing just because of this arrangement. Any help would be appreciated.
Well, I don't think it is because of the arrangement.
As far as I remember, FB doesn't execute JavaScript on the provided URL.
Facebook's scraper just looks at the HTML code of your page; it's not a full-fledged "browser" that would execute any client-side code.
With that being said, whatever meta tags you need there can't be added via JS on the client side; they must be server-side rendered.
I am not sure what technology you are using to serve this app, but I assume it is a React app, and it would be easy to handle this via a small Express server that serves the app with the right meta tags already in place.
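A minimal sketch of that idea, assuming an Express server, a built index.html containing a placeholder comment near the top of the head, and a hypothetical getArticle lookup (the route, placeholder and field names are assumptions):

```js
// Minimal sketch: inject og: tags server-side before sending the SPA shell.
// The placeholder string, getArticle() lookup and field names are assumptions.
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const template = fs.readFileSync(path.join(__dirname, 'build', 'index.html'), 'utf8');

app.get('/articles/:slug', (req, res) => {
  const article = getArticle(req.params.slug); // hypothetical data lookup
  const ogTags = `
    <meta property="og:title" content="${article.title}" />
    <meta property="og:description" content="${article.summary}" />
    <meta property="og:image" content="${article.imageUrl}" />`;
  // build/index.html is assumed to contain <!--OG_TAGS--> near the top of <head>.
  res.send(template.replace('<!--OG_TAGS-->', ogTags));
});

app.use(express.static(path.join(__dirname, 'build')));
app.listen(3000);
```

This way the scraper sees the tags in the initial HTML response, and the React app still hydrates normally on the client.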
I'd like to know the amount of data that is going over the wire when someone is first opening my Meteor app.
Pingdom is useful but I'd like something I can run locally on my own machine.
Ideally I'd also like to see a breakdown per package so I can decide on whether I want to keep or ditch a specific package.
You can just use your browser's developer tools. For example, in Chrome, open the developer tools (right click -> Inspect Element) and go to the Network tab. Refresh and you'll see all of the JavaScript files and their sizes, one per package. You can filter for only Scripts and then sort by size (you may have to do a full refresh to clear the cache for this to work). jQuery will probably be one of the biggest packages, if not the biggest.
You can also run meteor with the --production flag and the server will send one concatenated and minified js file. This is much smaller than the total size of the individual package files, but shows you the actual size of the data that will be sent in production.
You also need to be aware of how much data you are publishing/subscribing. If you add the meteorhacks:fast-render package, the initial published set of data will be added as a script tag to the HTML. You should also be aware of how much data you are publishing while the user browses and uses your application. Something like Kadira is helpful with that.
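On the publish side, trimming what you send down the wire is straightforward; a sketch, assuming a Posts collection (the collection name, fields and limit are made up for illustration):

```js
// Sketch: only publish the fields and documents the client actually needs.
// 'Posts', the field names and the limit are illustrative.
Meteor.publish('postTitles', function () {
  return Posts.find({}, {
    fields: { title: 1, createdAt: 1 }, // omit heavy fields like the full body
    limit: 20
  });
});
```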
I'm looking to get structured article data from webpage urls. So far I've found these two services http://www.diffbot.com/ and http://embed.ly/extract/demos/nlp. Are there better alternatives or is it worthwhile to write the code to do this myself?
If you'd like to skip the code, and are looking for simple software for web scraping / ETL applications, I'd suggest Foxtrot. It's easy enough to use and doesn't require coding. I use it to scrape data from certain gov't websites and dump it into an Excel spreadsheet for reporting purposes.
I have done web scraping / content extraction for quite some time now.
For me the best approach is to write a Chrome extension and automate the browser with its API. This requires that you know JavaScript and HTML. In one of my recent projects I use a background page with a couple of editable divs to configure the scraping session. I have some buttons on the background page to start the process. The background page loads a JS script which listens for the buttons' click events.
When one of the buttons is clicked I open a new tab for the scraping session with chrome.tabs.create. The background JS also registers a chrome.tabs.onUpdated.addListener callback to inject content scripts when the tab URL contains a specific page/domain name.
The content script then does the scraping job, for example selecting some elements with jQuery, regular expressions, etc., and finally sends a message with an object back to the background JS using chrome.runtime.sendMessage. The background JS script listens for messages with chrome.runtime.onMessage.addListener and acts based on the content being extracted.
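A stripped-down sketch of that background/content-script wiring (Manifest V2 style, since it uses a background page; the domain, selectors and message shape are just examples, not my actual project's code):

```js
// background.js -- sketch only; domain, selector and message fields are examples.
document.getElementById('start').addEventListener('click', function () {
  // Open a new tab for the scraping session.
  chrome.tabs.create({ url: 'https://example.com/listing' });
});

chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
  // Inject the content script once the target page has finished loading.
  if (changeInfo.status === 'complete' && tab.url.indexOf('example.com/listing') !== -1) {
    chrome.tabs.executeScript(tabId, { file: 'content.js' });
  }
});

chrome.runtime.onMessage.addListener(function (message) {
  if (message.type === 'scraped') {
    console.log('Extracted items:', message.items); // act on the extracted data
  }
});

// content.js -- runs in the page, extracts data and sends it back.
var items = Array.prototype.map.call(
  document.querySelectorAll('.result-row'),
  function (row) { return row.textContent.trim(); }
);
chrome.runtime.sendMessage({ type: 'scraped', items: items });
```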
The extension also automates web databases by clicking, for example, the next-page links.
I have added a timing setting to control the number of links clicked / tabs opened per minute, so that access is deliberately slowed down and excessive crawling is avoided.
Finally, the results are uploaded with an AJAX call to a PHP page, which inserts them into a MySQL database.
The next time the extension runs, it uses another AJAX call to compare against the keys/links that already exist in the database and ensures that only new information is extracted.
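The upload and de-duplication steps could look roughly like this; the endpoint URLs and payload fields are placeholders, and the PHP pages are assumed to return the existing keys and do the actual INSERT into MySQL:

```js
// Sketch only: endpoint URLs and payload fields are placeholders.
// existing-keys.php and insert.php are assumed server-side scripts.
function uploadResults(items) {
  // Ask the server which keys it already has, then send only the new rows.
  fetch('https://example.com/existing-keys.php')
    .then(function (res) { return res.json(); })
    .then(function (existingKeys) {
      var fresh = items.filter(function (item) {
        return existingKeys.indexOf(item.key) === -1;
      });
      return fetch('https://example.com/insert.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(fresh)
      });
    });
}
```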
I have also built extensions like the above for Firefox, but the best and easiest solution for me is a Chrome/Chromium extension.