How can I prevent bing/bat.js from loading Clarity? - ms-clarity

Since 8 December 2021, bing/bat.js has started loading Clarity.js from various domains like d.clarity.ms, e.clarity.ms, etc.
I never asked for or turned on this feature.
Our Content Security Policy blocks it and generates errors on each page load.
On bing.com/webmasters I did not find any way to turn Clarity on or off, and the domain is not included in any Clarity project.
Could you please help me find a way to stop bat.js from calling Clarity?

I've found the way.
In Microsoft Advertising, the UET tag has to be reconfigured: a checkbox has appeared in the UET wizard that allows you to turn off Clarity.
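On a related note: if you instead decide to keep the tag and allow Clarity, the CSP violation reports name the hosts involved. A minimal sketch of the extra sources, assuming a *.clarity.ms wildcard covers the subdomains seen above (verify against your own reports before deploying):

    Content-Security-Policy: script-src 'self' https://bat.bing.com https://*.clarity.ms; connect-src 'self' https://*.clarity.ms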

Related

How to customize web-app (pages and UI) for different customers [closed]

We have an ASP.NET web-application which has become difficult to maintain, and I'm looking for ideas on how to redesign it. It's an employee administration system which can be highly customized for each of our customers. Let me explain how it works now:
On the default page we have a menu where a user can select a task, such as Create Employee or View Timesheet. I'll use Create Employee as an example.
When a user selects Create Employee from the menu, an ASPX page is loaded which contains a dynamically loaded user control for the selected menu item; e.g. for Create Employee this would be AddEmployee.ascx.
If the user clicks Save on the control, it navigates back to the default page.
Some menu items involve multiple steps, so if the user clicks Next on a multi-step flow it navigates to the next page in the flow, and so on until it reaches the final step, where clicking Save navigates to the default page.
Some customers may require an extra step in the Create Employee flow (e.g. SecurityClearance.ascx) but others may not.
Different customers may use the same ASCX user control, so in AddEmployee.OnInit we can customize the fields for that customer, i.e. make certain fields hidden, read-only, or mandatory.
The following things are customizable per customer:
Menu items
Steps in each flow (ascx control names)
Hidden fields in each ascx
Mandatory fields in each ascx
Rules relating to each ascx, which allows certain logic to be used in the code for that customer
The customizations are held in a huge XML file per customer, which can run to 7,500 lines; a hypothetical fragment is sketched below.
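For illustration only, here is what a fragment of such a per-customer file might look like. Every element and attribute name here is invented for the sketch, not taken from the actual file:

    <!-- Hypothetical per-customer customization fragment; all names are
         illustrative, not from the real 7,500-line file. -->
    <customer id="AcmeCorp">
      <menu>
        <item id="CreateEmployee" flow="CreateEmployeeFlow" />
        <item id="ViewTimesheet" flow="ViewTimesheetFlow" />
      </menu>
      <flow id="CreateEmployeeFlow">
        <!-- Step list varies per customer; this one adds a clearance step. -->
        <step control="AddEmployee.ascx" />
        <step control="SecurityClearance.ascx" />
      </flow>
      <control name="AddEmployee.ascx">
        <field name="MiddleName" hidden="true" />
        <field name="Salary" readOnly="true" />
        <field name="Email" mandatory="true" />
        <rule name="RequireClearanceForContractors" />
      </control>
    </customer>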
Is there any framework or rules-engine that we could use to customize our application in this way? How do other applications manage customizations per customer?
If your regular data is held in a database, I'm not entirely sure why you'd want to have all of that customer-specific information in an XML file. Move it into the database.
Next, there are many different kinds of rules engines out there. Considering you're using ASP.NET, you might want to look at Windows Workflow for at least some of this. You might read the following: http://karlreinsch.com/2010/02/05/microsoft-rule-engines/
A long time ago I used a product called Haley Rules to drive a C# web app. It controlled everything from the screens that were available right down to the fields that appeared and whether they were required or not. It took a while to get the team on board with how it worked, but once that happened, bringing on a new client was extremely simple. Haley has since been gobbled up by Oracle, but it was probably the absolute best one out there.
Others you might be interested in are NxBRE and even nCalc. NxBRE is an actual rules engine, a port of one built for Java. nCalc, on the other hand, isn't a rules engine per se; however, if you can express your logic in simple boolean statements then it is extremely fast. I'm currently using it to drive page flow in one of our applications.
Some commercial ones include FlexRule and iLog.
Your existing rule-engine tool supports your web application, which means it already meets your needs. You could use another rules engine like MS Workflow, but IMO that can also end in a hard-to-maintain situation.
Let's say there is a registration portal. It collects general user information and saves it to the database. Simple. We build one portal for one client with several ASCXs and rules. Then for another client, we add more rules and more controls to those ASCXs. Working this way, sooner or later we will reach the final-straw client. At that point the code base is hard to maintain and devs lose themselves in the mass of rules. That is what happened to me.
So to me, it is not about which rules engine to use.
Then how?
I once asked a question where one of the answers made sense to me (though it was not the accepted answer). In that answer, the author asked what kind of company you are. For your question it is more like which department you are in, or whether you want to separate your dev teams.
If you are on an architecture team, build a framework with a rules engine. Create a basic registration portal as a sample, and keep the DAO and BO layers decoupled from the UI (separate layers).
If you are on a customization team, create customized user controls (don't reuse the user controls from the basic version). What you recreate is just the UI; you can still use the DAO and BO layers, since they are not defined in the user controls but in other layers. This way you get the freedom to define client-specific rules without worrying about contaminating other clients' rules or introducing new bugs into other clients' registrations.
I realise this is not exactly an answer to your question. Anyway, these are my thoughts after limited experience working on a rules-engine-based, multi-client web application.

How do you find the balance between Javascript (jQuery) and code behind in ASP.NET

Stackoverflow members,
How do you currently find the balance between JavaScript and code-behind? I have recently come across some extremely bad (in my eyes) legacy code that lends itself to chaos (someHugeJavafile.js) and contains a lot of the logic used in many of the pages.
Let's say for example that you have a Form that you need to complete.
1. Personal Details
2. Address Information
3. Little bit more about yourself
You don't want to overload the person with all the fields at once, so you decide to split it up into steps.
Do you create separate pages for Personal Details, Address Information, and a little bit more about yourself?
Do you create controls for each and hide and show them on a postback or using some update panel?
Do you use jQuery and do some checking to ensure that the person has completed the required fields for the step and show the new "section" by using .show()?
How do you usually find the balance?
First of all, let's step back on this for a moment:
Is there a CMS behind the site that should be considered when creating this form? Many sites use some system for managing content, and to my mind this shouldn't be forgotten or ignored at first glance.
Is there a reason for having 3 separate parts to the form? I might set up a Wizard control to go through each step, presuming the same outline would work and the trade-offs in using it are acceptable. If not, controls would be the next logical choice, as I don't think a complete page per step is worth adopting here.
While JavaScript validation is a good idea, some users may have JavaScript disabled. Should they be supported, or at least warned that the form needs JavaScript?
Balance is in the eye of the beholder, and every project is different.
Consider outlining general themes for your project. For example: "We're going to do all form validation client-side." or "We're going to have a 0 refresh policy, meaning all forms will submit via AJAX." etc.
Having themes helps answers questions like the one you posted and keeps future developers looking in the right places for the right code.
When in doubt, try to see your code through the eyes of someone who has never seen it before (or, as is often the case, yourself 2 to 3 years down the road), and ask yourself: "Based on the rest of the code, where would I look for this function?"
Personally, I like option number 3, but that's just because it fits best with the project I'm currently working on and I have no need to postback or create additional pages.
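A minimal sketch of option 3, assuming markup where each step is a div with class step containing a Next button with class next, and that the steps are adjacent siblings (all names here are illustrative):

    // Show one step at a time; Next validates the current step's
    // required fields before revealing the following sibling step.
    $(function () {
        var $steps = $('.step');
        $steps.hide().first().show();

        $('.next').click(function () {
            var $current = $(this).closest('.step');

            // Block the transition if any required field is still empty.
            var incomplete = $current.find('[required]').filter(function () {
                return $.trim($(this).val()) === '';
            });
            if (incomplete.length) {
                incomplete.addClass('error');
                return false;
            }

            $current.hide().next('.step').show();
            return false;
        });
    });

Server-side validation should still back this up, since any client-side check can be bypassed.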

Give away signs that a site is Drupal? [closed]

I'm trying to alter my site in such a way so that when people view it, they don't know it's powered by Drupal. So, was wondering if there are any signs that give this away that I should know about?
Some of the giveaways I know of, are:
When adding content, the URL will contain "node/add".
The file misc/favicon.ico exists.
etc.
I'm looking for similar signs.
Let's look at a fairly customized page based on Drupal: http://gemini-lights.com/ (a random page from the Drupal sites repository).
There are many giveaways:
if you change www.example.com/link/link2 to www.example.com/?q=link/link2 and it still works and points to the right page
www.example.com/user/1 gives you a profile page
resources (imgs, CSS, etc.) are in /sites/all|example.com/themes/ or something similar
there are CSS classes applied to many key elements of the site (like body) that do not change appearance - Drupal uses them to provide some info about the state of the page (like <body class="front not-logged-in page-front-page two-sidebars">)
probably many others
My advice is: don't try too hard to hide the CMS of your website; if a hacker wants to find out what CMS you are running, he/she will find out. I'd focus on keeping the CMS up to date (Drupal makes this easy) and watching which modules you install - they are the most likely attack vectors.
Since this question is still getting many hits, let me update it with an example from a major company (one of the biggest telephone companies in Poland) that, to my (pleasant) surprise, is using Drupal for its main site, http://dialog.pl/:
The usual giveaway pages like /user/1, /login, etc. redirect to the main page, so you can see the creators of the site have done their homework ;)
...but the source of the page contains my favourite giveaway: the use of the Zen theme, with URLs like /sites/all/themes/zen-dialog-main-page/../zen/css/page-strona_glowna.php and CSS classes like <body class="front not-logged-in node-type-page two-sidebars">
One more giveaway is the update.php page, which has the familiar Garland theme (props to Kevin for this one).
As you can see, it's still possible to tell that the website is using Drupal - and this is a website of a major corporation. So the above advice still holds: don't waste your resources on trying to hide the CMS you used, keep it up to date (that's why the update.php file is probably still in place), monitor security vulnerabilities, use strong passwords, etc.
You're wasting your time:
Obscurity is not a form of security. And trying to hide Drupal may only tempt a hacker to beat you.
If there is a security flaw, you will almost certainly miss it, and the hacker only has to try a specific attack vector. He or she is not going to check whether it's Drupal or not; the attack may come from automated software that won't care.
The changes you make to hide Drupal may actually make your site less secure. Especially if you change the core and are no longer able to tell if your site is up-to-date.
It's very likely that the effort you'd spend hiding Drupal could instead be applied to a proven, effective security policy, with better results.
Login page is /user or /user/login
Admin page is /admin or ?q=admin
/node displays a listing of the latest nodes
/node/n where n is a number displays the node with that number (for example /node/1 displays the first node ever created)
The word 'node' or 'views' in element classes in the page source.
In things which are paginated, page 2 is actually displayed as page/1 or /1 in the URL (Drupal pagination URLs are sort of geeky like that).
Like others have said, don't worry too much about this. It's a waste of time. Just keep Drupal core and all your modules up to date (you can even set it to email you when security releases are released for your installed modules) and you shouldn't have to worry about a thing.
Quick ways to find out if a site is a Drupal site.
Browse the source code and search for Drupal.settings (it appears on all sites using the Google Analytics module); see the console sketch after the lists below.
Go to www.example.com/CHANGELOG.txt; on a Drupal site, it will show the current version.
There are a lot of other ways to tell if a site is a Drupal site, but the above are fast and certain.
Other signs would be:
Markup:
<div id="node-2020 ... (divs with id node-[number])
<div class="views- ... (divs with a class of views-[something]
class="clear-block" (clear-block is the drupal implementation of the clear-fix CSS trick)
URLs:
node
node/[number]
node/add
admin -> giving 403
admin/build/modules -> giving a 403
HTTP Expires header set to Dries' (the creator of Drupal) birthday
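A few of the signs above can be checked straight from the browser console while on the target site. A minimal sketch; treat hits as hints only, since a hardened site can remove or rename all of these:

    // Drupal.settings is injected by core on behalf of many modules.
    if (window.Drupal && window.Drupal.settings) {
        console.log('Drupal.settings is present: almost certainly Drupal.');
    }

    // CHANGELOG.txt, when left in place, states the exact core version.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/CHANGELOG.txt', true);
    xhr.onload = function () {
        if (xhr.status === 200 && /^Drupal \d/m.test(xhr.responseText)) {
            console.log('CHANGELOG.txt found: ' + xhr.responseText.split('\n')[0]);
        }
    };
    xhr.send();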
greggles (lead of the Drupal Security team) wrote an article about hiding the fact a site is running Drupal: Hiding the fact your site runs Drupal OR fingerprinting a Drupal site.
Some of the things that make it possible to tell a site is running Drupal can be altered, but in some cases it is not worth it, or it requires resources that would be better spent on something else, such as making Drupal more secure, or avoiding security holes in the site.
For example, the messages shown to users by modules are an indication that the site is running Drupal (and which version exactly), but altering those messages would mean changing them every time a new module, or a new version of a module, is installed. CSS classes are something else that helps identify a Drupal site, but changing them is not that easy, as some modules depend on a specific CSS class to work. The fact that the JavaScript code uses a Drupal object also helps in spotting a Drupal site.
New Answer to old question. This site will tell you if a site is built with Drupal, and could give your game away. It does give false negatives though, so it might be worth it to test it out with that website and see how well you can obfuscate.
You can't really escape people's suspicions. To do so, you'd have to change file-systems, stylesheets, markup, etc. This is unreasonable. Why does it matter if you're using Drupal?
I find the http://wappalyzer.com Chrome extension an excellent tool for detecting what a site is powered by. It goes beyond detecting just Drupal and lists many of the third-party tools and underlying technologies a site uses.
People who know Drupal may identify it from the source. But Drupal has no generator header like Joomla and others.
The Expires headers are pretty unique as well. In fact, they are set to Dries Buytaert's (the creator of Drupal) date of birth. As far as I can tell, they have been set like the below since Drupal 4.6.
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Dead giveaway:
Try going to update.php, you'll get Access Denied (and the Garland theme).
Chrome has an add-on called Chrome Sniffer that shows what CMS any site is built on.

Automated link-checker for system testing [closed]

I often have to work with fragile legacy websites that break in unexpected ways when logic or configuration are updated.
I don't have the time or knowledge of the system needed to create a Selenium script. Besides, I don't want to check a specific use case - I want to verify every link and page on the site.
I would like to create an automated system test that will spider through a site and check for broken links and crashes. Ideally, there would be a tool that I could use to achieve this. It should have as many as possible of the following features, in descending order of priority:
Triggered via script
Does not require human interaction
Follows all links including anchor tags and links to CSS and js files
Produces a log of all found 404s, 500s etc.
Can be deployed locally to check sites on intranets
Supports cookie/form-based authentication
Free/Open source
There are many partial solutions out there, like FitNesse, Firefox's LinkChecker and the W3C link checker, but none of them do everything I need.
I would like to use this test with projects using a range of technologies and platforms, so the more portable the solution the better.
I realise this is no substitute for proper system testing, but it would be very useful if I had a convenient and automatable way of verifying that no part of the site was obviously broken.
We use and really like Linkchecker:
http://wummel.github.io/linkchecker/
It's open-source, Python, command-line, internally deployable, and outputs to a variety of formats. The developer has been very helpful when we've contacted him with issues.
We have a Ruby script that queries our database of internal websites, kicks off LinkChecker with appropriate parameters for each site, and parses the XML that LinkChecker gives us to create a custom error report for each site in our CMS.
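For reference, a minimal invocation in the spirit of that pipeline; the XML output type is from the LinkChecker docs, and the URL is a placeholder:

    # Crawl the site and write the results as XML for later parsing.
    linkchecker --output=xml http://intranet.example.com/ > report.xml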
I use Xenu's Link Sleuth for this sort of thing. It quickly checks for dead links etc. on any site. Just point it at any URI and it'll spider all links on that site.
Description from the site:
Xenu's Link Sleuth (TM) checks Web sites for broken links. Link verification is done on "normal" links, images, frames, plug-ins, backgrounds, local image maps, style sheets, scripts and Java applets. It displays a continuously updated list of URLs which you can sort by different criteria. A report can be produced at any time.
It meets all your requirements apart from being scriptable, as it's a Windows app that requires manual starting.
What part of your list does the W3C link checker not meet? That would be the one I would use.
Alternatively, twill (python-based) is an interesting little language for this kind of thing. It has a link checker module but I don't think it works recursively, so that's not so good for spidering. But you could modify it if you're comfortable with that. And I could be wrong, there might be a recursive option. Worth checking out, anyway.
You might want to try using wget for this. It can spider a site, including the "page requisites" (i.e. images, CSS, and script files), and can be configured to log errors; see the sketch below. I don't know if it will give you enough information, but it's free and available on Windows (Cygwin) as well as Unix.
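A minimal sketch of such a run; the intranet URL is a placeholder, and the broken-link summary appears at the end of the log:

    # Spider the site recursively without saving files, logging each
    # request; wget prints a "Found N broken links" summary at the end.
    wget --spider -r -nv -o spider.log http://intranet.example.com/
    grep -i 'broken' spider.log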
InSite is a commercial program that seems to do what you want (haven't used it).
If I was in your shoes, I'd probably write this sort of spider myself...
I'm not sure that it supports form authentication, but it will handle cookies if you can get it going on the site, and otherwise I think Checkbot will do everything on your list. I've used it as a step in a build process before to check that nothing is broken on a site. There's example output on the website.
I have always liked linklint for checking links on a site. However, I don't think it meets all your criteria, particularly the aspects that may be JavaScript dependent. I also think it will miss the images called from inside CSS.
But for spidering all anchors, it works great.
Try SortSite. It's not free, but seems to do everything you need and more.
Alternatively, PowerMapper from the same company has a similar-but-different approach. The latter will give you less information about detailed optimisation of your pages, but will still identify any broken links, etc.
Disclaimer: I have a financial interest in the company that makes these products.
Try http://www.thelinkchecker.com. It is an online application that checks the number of outgoing links, page rank, and anchors. I think this is the solution you need.

What options are there to find out if my ASP.NET MVC view is not XHTML compliant [closed]

Clarification: Thanks for the suggestions of tools for validating XHTML. I'm primarily looking for a solution that will run server-side (or on the client with jQuery) so I can set it and forget it during development, and get told when I have issues without having to run a tool all the time.
All this tag-soup stuff scares me with ASP.NET MVC!
I'd be less scared if I could validate each and every view coming out of my view generator for XHTML compliance. This is especially important with the amount of jQuery I am planning on writing: I don't want to spend hours debugging something just to find out I had an unclosed tag somewhere that prevented a selector from working.
What options are there for this? Off the top of my head I'm looking for solutions like the following, but I'm not sure which of these are practical:
jQuery XHTML checker (see the sketch after this question)
IIS filter
Browser plugin (I assume there's a Firebug plugin to do this)
Doing something clever with the MVC View classes (I'm not sure if this is possible or worth pursuing).
Modifying the HTML writer to check on 'flush()' and throw an exception if the output is not XHTML.
ASP.NET configuration option I'm not aware of to validate the page.
All options are welcome answers!
I'd prefer a server side technology so during debugging I can throw a hard exception, and in production I can log any errors. It must validate the full page after the master page has been applied. Looking for warnings in the IDE is not a good enough solution!
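On the jQuery option above, here is a minimal client-side sketch: fetch the page's raw source and re-parse it as XML, where a parsererror node means the markup is not well-formed. Note this checks well-formedness only, not full XHTML validity against the DTD:

    // Fetch the current page's source and check it parses as XML.
    $.get(location.href, function (src) {
        var doc = new DOMParser().parseFromString(src, 'application/xml');
        var err = doc.getElementsByTagName('parsererror')[0];
        if (err) {
            // During development you could throw here instead of logging.
            console.error('Page is not well-formed XML: ' + err.textContent);
        } else {
            console.log('Page parsed as well-formed XML.');
        }
    }, 'text');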
As a plugin there's HTML Validator for Firefox.
The W3C HTML validator is available as source, so you can download it and do pretty much anything you want with it. You could set it up with a list of URLs to constantly crawl your test server and log any errors.
As Jeremy said, the W3C validator is out there, but on a side note, to check the accessibility of your views/site you can use TAW.
I realise this is slightly off topic, but it's still in an important part of web-site development.
