I'm running an MVC2 site against IIS7 and would like to capture more detail of how users traverse the site - ideally to the point of being able to replay even the duration between mouse clicks - feedback on where people pause and/or backtrack.
I could do this with Flash but that's no longer an option. Now it's just IIS7 via ASP.NET on Framework 4. IIS7 should be able to provide this via 3rd-party extensions - especially for this sort of niche need. I'm willing to consider client-side .NET components, but this sure seems to be the responsibility of the server.
[oops...does this belong on Server Fault?]
thx
justSteve

Here is a solution that we have used:
http://www.seevolution.com/
I don't think that it gives time between clicks, but it does give very detailed tracking considering its price (I don't know if that's an issue). We have really liked it. Fantastic detail.
You could also roll your own solution. Using jQuery and the $(document).click() function, you can log when they click and where on the screen. Then every couple of minutes, serialize it and fire it off to the server. You can get extremely fine-grained detail that way. The nice thing with Seevolution is that they've done all of the work for you already, but it probably isn't as detailed as you would like.
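A rough sketch of that approach (the /Tracking/Log endpoint is hypothetical - point it at whatever controller action you add for this):

    var clickLog = [];

    // Record where and when each click happens.
    $(document).click(function (e) {
        clickLog.push({
            x: e.pageX,
            y: e.pageY,
            time: new Date().getTime(),
            target: e.target.tagName
        });
    });

    // Every couple of minutes, serialize the buffer and fire it off to the server.
    setInterval(function () {
        if (clickLog.length === 0) { return; }
        $.ajax({
            url: '/Tracking/Log',               // hypothetical MVC action
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(clickLog)
        });
        clickLog = [];
    }, 2 * 60 * 1000);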
JMax
Maybe not the "in-house" solution you're after but we are about to implement SessionCam at my company, which seems like a pretty good match for what you're looking for. Not having actually finished implementing it yet, I can't vouch for it in terms of quality at this point - but the description of the product certainly matches.
You aren't going to be able to capture the level of detail you need using a solely server-side solution. There needs to be a degree of client-side work - whether it's in flash or javascript - to capture things such as where the mouse is hovering (for heatmaps etc).
I personally haven't used this product, but a friend of mine spoke highly of it.
Clicktale
Call me silly, but I'm looking for a way to use a website automatically, at the same time every day, but with the capability of responding to many different situations (all of them can be predicted, so I could do it myself; I just need it to recognise them). The problem is I'm very new to programming as a whole and I'm totally lost on where to start with this project... can anyone help? Is it even possible for someone at my level?
To put it simply: no. AI programming is very advanced and requires a lot of in-depth knowledge of programming. You might be able to hack together a solution using Selenium, which is a tool for testing the UI.
There is also the possibility of making a bot that clicks on specific locations on the screen, but it cannot respond to what is actually on the screen, and if the website is a bit slower than normal, things can go wrong as well. That would not be the best solution.
I'm working on a pretty big project right now and am trying to implement an MVP architecture. I'm starting to run across instances where I think jQuery or JavaScript might be better suited than server-side code. I'm looking for feedback on how others are implementing client-side programming into their enterprise applications. How are you structuring the client-side code, and how do you determine when to use it?
Things that can make the user say "wow". For example: populating search results when the user has typed only 3-4 characters of the search term. Just go back in time and think about Yahoo or Hotmail, which used to post back to the server when you clicked "Create Message"; when Google came along they did it on the client side without going to the server. I bet you would have said "wow" to that. At least I did. (A rough sketch of the search-as-you-type idea follows after these examples.)
Things that can reduce server load. For example: adding an extra data-entry row to an HTML table on the client instead of doing it through a round trip, increasing/decreasing a quantity, etc.
These are just a few examples to cite. Even to do these things properly you need to go to the server, but that happens behind the scenes using AJAX. Beyond this, you need to select a few jQuery plugins to use in your project - to name some: jQuery UI, jQuery Validation, jQuery AnythingSlider, etc. There are plenty of them.
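To make the first example concrete, here is a minimal sketch of search-as-you-type with jQuery. The /Search/Suggest URL and the element IDs are invented for illustration; in practice a plugin such as jQuery UI Autocomplete does most of this for you.

    var searchTimer;

    $('#searchBox').keyup(function () {
        var term = $(this).val();
        if (term.length < 3) { return; }        // wait until 3-4 characters are typed
        clearTimeout(searchTimer);              // debounce rapid typing
        searchTimer = setTimeout(function () {
            $.getJSON('/Search/Suggest', { term: term }, function (results) {
                var list = $('#suggestions').empty();
                $.each(results, function (i, text) {
                    list.append($('<li/>').text(text));
                });
            });
        }, 300);
    });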
http://ClearTrip.com is one site whose UX I envy. Visit their site from a mobile device and you will get further clues about their UX work. Besides just coding, you need to have a person on your team who can work on these UX aspects.
Regarding how this fits into DDD: I've just recently started my journey into DDD but one hears a lot about command/query separation in that circle. Certainly if you are doing something that hits your domain (like fetching for auto-completion or certainly if you allow partial page submission to accomplish a domain command) you have to decide how it gets there and how the domain is structured to handle it.
I think two decisions are most relevant.
First, bits entirely in the browser and even those specifically in your application layer are outside your domain and thus, though covered in the layered architecture part of the DDD discussion, do not land in the entity/value/event/service, etc. discussion. If, however, you are using AJAX to interact with your application layer and in turn need to access your domain, you need to consider again two things in my mind.
(a) Are you separating commands and queries simply using different methods on your domain? Fine if you have a relatively small demand for either queries or commands and this will not seem like "noise" in your domain API. Otherwise, you have a separate bounded context...another domain modeled just for queries that your UI needs to avoid clutter on your domain. Regardless, you are doing something like JS->AJAX handler in application layer->domain (including a domain service).
(b) Is this a command or a query? Once you have (a) figured out, this lets you know where the access will land...then use the presentation layer's use case to elaborate the domain concept and put it into your ubiquitous language.
Second, you have the DTO vs direct to domain decision. This can be a religious war gathering topic, but usually the answer is "depends." I think there are cases for using DTOs and cases for not (within the same architecture)...just search for all the discussions around the topic and apply the pattern only where it adds value; I won't try to cover details here.
Hope this provides some insight or at least conversation magnet to which others will add.
I guess this question is a little too subjective. Looks like I'm just going to grab a few books on advanced JavaScript and study up on the jQuery library.
The data on our website can easily be scraped. How can we detect whether a human is viewing the site or a tool?
One way is by calculating the time a user stays on a page. I do not know how to implement that. Can anyone help me detect and prevent automated tools from scraping data from my website?
I used a security image in the login section, but even then a human may log in and then use an automated tool. When the reCAPTCHA image appears after a period of time, the user may type the security image and then again use an automated tool to continue scraping data.
I developed a tool to scrape another site. So I only want to prevent this from happening to my site!
DON'T do it.
It's the web, you will not be able to stop someone from scraping data if they really want it. I've done it many, many times before and got around every restriction they put in place. In fact having a restriction in place motivates me further to try and get the data.
The more you restrict your system, the worse you'll make user experience for legitimate users. Just a bad idea.
It's the web. You need to assume that anything you put out there can be read by human or machine. Even if you can prevent it today, someone will figure out how to bypass it tomorrow. Captchas have been broken for some time now, and sooner or later, so will the alternatives.
However, here are some ideas for the time being.
And here are a few more.
And for my favorite: one clever site I've run across has a good one. It asks a question like "On our 'About Us' page, what is the street name of our support office?" or something like that. It takes a human to find the "About Us" page (the link doesn't actually say "About Us"; it says something similar that a person would figure out), and then to find the support office address (different from the main corporate office and several others listed on the page) you have to look through several matches. Current computer technology wouldn't be able to figure it out any more than it can figure out true speech recognition or cognition.
A Google search for "captcha alternatives" turns up quite a bit.
This can't be done without risking false positives (and annoying users).
How can we detect whether a human is viewing the site or a tool?
You can't. How would you handle tools parsing the page for a human, like screen readers and accessibility tools?
For example, one way is by calculating the time a user stays on a page, from which we can detect whether human intervention is involved. I do not know how to implement that; I'm just thinking about this method. Can anyone help with how to detect and prevent automated tools from scraping data from my website?
You won't detect automated tools, only unusual behavior. And before you can define unusual behavior, you need to find what's usual. People view pages in different orders, browser tabs allow them to do parallel tasks, etc.
I should make a note that if there's a will, then there is a way.
That being said, I thought about what you've asked previously and here are some simple things I came up with:
Simple naive checks might be user-agent filtering and checking. You can find a list of common crawler user agents here: http://www.useragentstring.com/pages/Crawlerlist/
You can always display your data in Flash, though I do not recommend it.
Use a captcha.
Other than that, I'm not really sure if there's anything else you can do but I would be interested in seeing the answers as well.
EDIT:
Google does something interesting: if you're searching for SSNs, after the 50th page or so they will show a captcha. That raises the question of whether you can intelligently time the amount of time a user spends on your page, or, if you want to introduce pagination into the equation, the time a user spends on one page.
Using the information we previously assumed, it is possible to put a time limit before another HTTP request is sent. At that point, it might be beneficial to "randomly" generate a captcha. What I mean by this is that maybe one HTTP request will go through fine, but the next one will require a captcha. You can switch those up as you please.
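A very rough sketch of that idea, shown with Node/Express sessions purely for brevity since the question doesn't mention a server stack - treat it as pseudocode for whatever framework you actually use; the /captcha route and the thresholds are invented:

    var express = require('express');
    var session = require('express-session');

    var app = express();
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

    var MIN_MS_BETWEEN_PAGES = 2000;            // faster than this looks automated

    app.use(function (req, res, next) {
        if (req.path === '/captcha') { return next(); }   // don't challenge the challenge page

        var now = Date.now();
        var last = req.session.lastRequestAt || 0;
        req.session.lastRequestAt = now;

        var tooFast = (now - last) < MIN_MS_BETWEEN_PAGES;
        var randomCheck = Math.random() < 0.02;           // ~1 request in 50 is challenged anyway

        if (tooFast || randomCheck) {
            return res.redirect('/captcha?returnTo=' + encodeURIComponent(req.originalUrl));
        }
        next();
    });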
Scrapers steal the data from your website by parsing URLs and reading the source code of your pages. The following steps can be taken to at least make scraping a bit more difficult, if not impossible.
AJAX requests make the data more difficult to parse and require extra effort to discover the URLs to be parsed.
Use cookies even for normal pages which do not require any authentication: create the cookie once the user visits the home page and then require it for all the inner pages. This makes scraping a bit more difficult.
Display the data encoded on the website and then decode it at load time using JavaScript. I have seen this on a couple of websites.
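A minimal sketch of that last point: the server emits the text Base64-encoded in a data- attribute (the attribute name and markup are invented here), and a script decodes it once the page loads, so a scraper that only reads the raw HTML never sees the plain text. Note this is obfuscation rather than real protection, and older IEs need a Base64 shim for atob.

    // Served markup (example): <span class="price" data-enc="JDQ5Ljk5"></span>
    $(function () {
        $('[data-enc]').each(function () {
            var decoded = window.atob($(this).attr('data-enc'));   // -> "$49.99"
            $(this).text(decoded);
        });
    });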
I guess the only good solution is to limit the rate at which data can be accessed. It may not completely prevent scraping, but at least you can limit the speed at which automated scraping tools will work, hopefully below a level that will discourage scraping the data.
This is my first post here ever.
I have to develop an application for a group of people with special needs. The functionality is really trivial; however, I have no clue how to design the interface so that they are able to use it.
Their intellectual abilities are perfect - they are actually studying in high school - but one of them types with his nose, which, needless to say, is very difficult, and another types really slowly with only one of his fingers, and neither can use the mouse.
I was wondering if I could use JavaScript to develop a usable interface, based on huge grids or something like that, or maybe you guys have a better idea.
Political incorrectness aside, why don't you ask them? You're talking about accessibility here, if they're using computers they must be able to tell you about what they like or dislike about user interfaces that they've encountered.
I'm going to split my answer into two parts - design and implementation.
From a design perspective, it's important not to be intimidated by the fact that the users use a computer in a different manner. Treat this like any other project. Observe how they currently use other apps, and ask about the kind of things that they find helpful, or have difficulty with. If they claim nothing is difficult, ask a teacher or assistant, who will be familiar with the kind of things they struggle with.
Once you've started implementation, try an idea and get initial feedback. If you simply ask how they find the prototype, they'll likely say it's ok. Instead, try observing them using it without saying anything or giving guidance. If they get stuck, let them find their own solution to the problem. If appropriate, you could ask the user to speak their thoughts out loud (e.g. "I need to save this form, so I'm scrolling to the bottom, and clicking save").
On the development side, try to use web standards (valid HTML, CSS and JavaScript). People often point to the "Web Content Accessibility Guidelines 2.0" (WCAG2), but this is quite terse and hard to understand; there are many more friendly articles on "Web Accessibility".
Someone with a physical disability is likely to use an alternate input device, such as a "Switch", onscreen keyboard, head-tracking device, a device for pushing keys on the keyboard, or speech recognition. Many of these methods involve simulating the keyboard, so by far the most important thing is to consider the accessibility of your site without using a mouse. For example, try tabbing through the page to see if you can access all elements in a reasonable amount of time. Consider using the accesskey attribute to provide an easy way to jump to different parts of the page (using 0 through 9 is often recommended so you don't interfere with browser shortcuts).
Also make sure that no part of your site is time-dependent, as different users may take different amounts of time to perform a task. For example, don't use the onchange Javascript event to update a page based on a listbox selection. Ensure you have alt text for images, so it's accessible for speech recognition. Make the pages short enough so that excessive scrolling isn't required, but not so short as to require following lots of links.
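A small sketch of the listbox point, with the element IDs invented: instead of navigating the moment the selection changes (hard on switch and keyboard users who arrow through the options), let the user confirm with a button, and give that button an access key:

    // Avoid: $('#reportList').change(function () { window.location.href = ...; });
    $('#showReportButton')
        .attr('accesskey', '1')                 // reachable through the browser's accesskey shortcut
        .click(function () {
            var reportId = $('#reportList').val();
            window.location.href = '/Reports/View/' + encodeURIComponent(reportId);
        });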
Those are just some ideas to get your mind going in the right direction - but there are many accessibility resources on the internet - steal freely, and don't reinvent the wheel.
I realise I haven't addressed your question about Javascript - that's because I think it's probably one of the less important considerations. If possible, use Progressive Enhancement techniques to make the site work with and without Javascript. You might also look into the WAI-Aria standard for giving semantics to your Javascript.
And finally, to reiterate my initial point - make something simple, show it to the users, tweak, and show again.
It doesn't really matter what technology you use. Use whichever suits you.
But make sure that you make UI components BIG in size (bigger buttons, bigger font, bold font, coloured font - are any of them colour blind?). This is for ease of use (you said someone types with his nose).
Also, it is better to have audio as an informative source along with the usual screen display whenever some wrong action is performed in the application. This way, visually impaired people will be assisted more.
Do it well, you are doing a divine job.
The first thing that you should read up on is the Web Content Accessibility Guidelines written up by the W3C.
In a nutshell this document describes the basic principles for people with disabilities in general.
For your needs regarding persons with special needs, you might want to look at Jakob Nielsen's article on Website Usability for Children, wherein principles of web design for young children or people with otherwise limited cognitive ability are outlined.
A few guys on our team are of the opinion that every web page in the application should be a web user control. So you'll have all of your html + event handling in the Customer.ascx, for example, and there will be a corresponding Customer.aspx page that contains Customer.ascx control.
These are their arguments:
This practice promotes versatility, portability, and re-usability.
Even if the page is not re-used right now, it might be in the future.
The Customer page might need to be moved to a different location or renamed sometime in the future, and moving user controls is easier.
This is a recommendation by MS for new development.
Is this really a recommendation for new development? Are there any drawbacks to this strategy? I agree that it's nice to have a user control on hand if the need arises, but it seems to be an overkill to do this to the entire application "just in case we need it later".
1, 2 & 3: Doing anything because "you might need it later" is a horrible strategy.
http://c2.com/xp/YouArentGonnaNeedIt.html
4: I have never read this and seriously doubt MS has ever said anything like this. Maybe some random article by one single person who has an MS tag or was an MVP or something, and a gullible junior dev took it as Gospel Truth.
It seriously complicates client-side script, as the NamingContainer jiggery will sometimes prepend _ctl0 etc. to everything.
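For example (the control name is invented), a TextBox with ID txtCustomerName inside a user control renders with a mangled client ID, so a plain getElementById lookup quietly fails; common workarounds are an "ends with" selector or emitting the ClientID from the markup:

    // The rendered ID ends up as something like "_ctl0_CustomerControl_txtCustomerName",
    // so this returns null:
    //     document.getElementById('txtCustomerName');

    // Workaround 1: jQuery "ends with" attribute selector.
    var customerName = $("input[id$='txtCustomerName']").val();

    // Workaround 2: emit the real client ID from the .ascx markup.
    //     var box = document.getElementById('<%= txtCustomerName.ClientID %>');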
I don't think MS ever recommended it. Request links to MSDN documentation.
Typically by the time you are done implementing something, and it is sufficiently complicated, you'll find a lot of "gotchas" whenever you try to "reuse" it. A good example is relative links in the user control that no longer work outside of their path.
Users don't need the ability to add/edit/delete Customers on every page. Indeed, you start to get into caching issues if you have these types of controls on every page. For example if on an Invoice page, you add a Customer, will the Invoice control be updated with the new Customer? All sorts of inter-control operability issues can manifest. These issues are hard to argue for, because of course, everyone's user control will be perfect, so this will never happen. ha ha right.
See if they can come up with an example where moving/renaming a user control actually saved time, instead of making it more complicated. Draw up an actual example and show the pros/cons of each.
Personally I'm not a fan, I think it adds a layer of complexity to the application that is not strictly necessary at the early stages. If you need to reuse a component that you had not previously thought would be reused, refactoring it into a user control at that point should not be very difficult.
I came across an application in .NET 1.1 that was written like that once. Someone must have heard that same misguided "best practice" and taken it for the absolute truth.
I agree that it adds a level of complexity that's mostly not needed. I usually find usercontrols more useful for something like portions of a page that are repeated on several pages. If you think you'd reuse the entire page... why not just use the original page?
I also don't understand the moving/renaming argument. It's not that difficult to rename/move a page. If you do what your colleagues are suggesting, you'd end up with a customers.aspx page that contains nothing but an orders.ascx file? I see more potential confusion/errors with that approach than by just renaming/moving a file.