This question is the opposite of "Header in gmail for thread hinting".
I have a system that generates notifications for various things. A lot of these have the same subject line, but different content.
Is there any way, short of adding some kind of unique token to the subject line, of forcing the emails to NOT be in the same thread, i.e. to show up individually? Changing headers and/or content would be acceptable, but changing the subject line will scare people. Also, not all of the recipients are Google Apps/@gmail.com accounts, so I can't use things like "+hash".
If it matters, the application is written in C# and ASP.Net.
Anyone know how to do this?
Google seems to weigh the subject line pretty heavily in their threading heuristic, so there doesn't appear to be much that you, as a sender, can do about it without unique-ifying the subject lines somehow.
Adding a timestamp to the subject line seems to defeat the threading -- do you think you could get your users to buy into that?
On the recipient's side, they could use the IMAP interface to bypass Gmail's threading. And I hear Google is open to giving users the option to disable the "conversations" feature -- it's obvious that there are a lot of people out there who hate it!
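Since the question says the application is C# and ASP.NET, here is a minimal sketch of the timestamp idea above; the addresses and the helper name are placeholders, not anything from the original system:

    using System;
    using System.Net.Mail;

    public static class NotificationMail
    {
        // Hypothetical helper: appending the send time makes every subject
        // unique, which seems to keep Gmail from grouping the notifications.
        public static MailMessage Build(string to, string baseSubject, string body)
        {
            return new MailMessage("notifications@example.com", to)
            {
                Subject = string.Format("{0} [{1:yyyy-MM-dd HH:mm:ss} UTC]",
                                        baseSubject, DateTime.UtcNow),
                Body = body
            };
        }
    }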
You may be in luck now; check out this article: https://gsuiteupdates.googleblog.com/2019/03/threading-changes-in-gmail-conversation-view.html. Because of the new requirement that the mail headers reference a previous email's Message-ID, your system-generated emails are probably no longer threading, if your situation is like mine. Good for you, but that wasn't what I wanted for my use case! Enjoy.
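If you would rather leave the subject alone and lean on the header behaviour described in that article, here is a hedged C# sketch: give every notification its own Message-ID and set no In-Reply-To or References headers, so there is no parent message for Gmail to attach it to. The host names and addresses are placeholders, and whether this fully defeats threading for every recipient is an assumption, not something the article guarantees.

    using System;
    using System.Net.Mail;

    public static class UnthreadedMailer
    {
        public static void Send(string to, string subject, string body)
        {
            var message = new MailMessage("notifications@example.com", to)
            {
                Subject = subject,   // the subject itself stays unchanged
                Body = body
            };

            // One unique Message-ID per mail, and deliberately no In-Reply-To
            // or References headers pointing at earlier notifications.
            message.Headers.Add("Message-ID",
                string.Format("<{0}@notifications.example.com>", Guid.NewGuid()));

            using (var client = new SmtpClient("smtp.example.com"))
            {
                client.Send(message);
            }
        }
    }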
Related
I want to prevent or hamper the parsing of the classifieds website that I'm improving.
The website uses an API with JSON responses. As a solution, I want to insert useless data between the real data, since scrapers will probably parse by ID, and give no clue about it in either the JSON response body or the headers, so they won't be able to distinguish it without close inspection.
To prevent users from seeing it, I won't return that "useless data" to my users unless they explicitly request it by ID. From an SEO perspective, I know that Google won't crawl a page with useless data if there aren't any internal or external links to it.
How reliable would that technique be? And what problems/disadvantages/drawbacks do you think could occur in terms of user experience or SEO? Any ideas or suggestions will be very much appreciated.
P.S. I'm also limiting large numbers of requests made in a short time, but it doesn't help. That's why I'm considering this technique.
I don't think banning the scrapers would work better, because they can change IPs, etc.
Maybe I could get a better solution by requiring a login to access more than 50 item details, for example (and maybe that would work against Selenium, too?). Registering would make it harder; even if they do it, I can see those users and slow their response times, etc.
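The question doesn't say what the site is built with, so purely to illustrate the decoy-record idea, here is a rough sketch in C#; every type and member name is invented:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Real and decoy listings share the same shape, so a scraper walking IDs
    // cannot tell them apart, while normal searches never surface the decoys.
    // The IsDecoy flag would of course have to be stripped from the API
    // response DTO so it never appears in the JSON body.
    public class Listing
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public bool IsDecoy { get; set; }
    }

    public static class ListingService
    {
        // What the normal UI uses: decoys are filtered out server-side.
        public static IEnumerable<Listing> Search(IEnumerable<Listing> all, string query)
        {
            return all.Where(l => !l.IsDecoy &&
                                  l.Title.IndexOf(query, StringComparison.OrdinalIgnoreCase) >= 0);
        }

        // What an explicit by-ID request returns: decoys come back looking
        // like real records, quietly poisoning an ID-walking scraper's data.
        public static Listing GetById(IEnumerable<Listing> all, int id)
        {
            return all.FirstOrDefault(l => l.Id == id);
        }
    }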
I have a Symfony project that is now being tested on a live system, and I'm using the delivery_address option to prevent sending email to real recipients.
I also needed some exceptions to this, so I used the very nice delivery_whitelist option (like here).
Right now, the whitelisted emails go both to the original recipient and to the delivery_address, but I'd like them to be sent only to the original, whitelisted address.
Is that possible in some way?
As @userfuser suggested, here is my slightly altered comment as an answer:
Since your requirement contradicts the semantics of the configuration options, your only option seems to be adding some logic to your mail-sending component. I am sure that you will be able to extract the whitelist and other options programmatically so you can handle your edge case.
Is there a way to determine whether the request coming to a handler (let's assume the handler responds to GET and POST) is being performed by a real browser versus a programmatic client?
I already know that it is easy to spoof things like the User-Agent and the Referrer, but are there other headers that are more difficult to spoof? Maybe headers that are not commonly available in classes like .NET's HttpWebRequest?
The other path I looked at is maybe using the encrypted ViewState to send a value to the browser that gets validated on the server side, though couldn't that value simply be scraped from the previous response and added as a POST parameter to the next request?
Any help would be much appreciated,
Cheers,
There is no easy way to differentiate, because in the end a programmatic POST looks the same to the server as a POST made by a user from a browser.
As mentioned, CAPTCHAs can be used to control posting, but they are not perfect (it is very hard, but not impossible, for a computer to solve them). They can also annoy users.
Another route is only allowing authenticated users to post, but this can also still be done programmatically.
If you want to get a good feel for how people are going to try to abuse your site, then you may want to look at http://seleniumhq.org/
This is very similar to the famous Halting Problem in computer science. See more on the proof, and on Alan Turing, here: http://webcache.googleusercontent.com/search?q=cache:HZ7CMq6XAGwJ:www-inst.eecs.berkeley.edu/~cs70/fa06/lectures/computability/lec30.ps+alan+turing+infinite+loop+compiler&cd=1&hl=en&ct=clnk&gl=us
The most common way is to use CAPTCHAs. Of course, CAPTCHAs have their own issues (users don't really care for them), but they do make it much more difficult to post data programmatically. They don't really help with GETs, though you can force users to solve a CAPTCHA before delivering content.
There are many ways to do this, like dynamically generated XHR requests that can only be made once a human has completed a task.
Here's a great article on NP-Hard problems. I can see a huge possibility here:
http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html
One way: You could use some tricky JS to handle tokens on click. Your server issues token IDs to elements on the page during the backend render phase and logs them in a database or data file. Then, when users click around and submit, you compare the IDs sent via the onclick() function. There are plenty of ways around this, but you could apply some heuristics to determine whether posts are too fast to be human: even if they scripted the hijacking of the token IDs and auto-submitted, you could check whether the time between click events looks automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break. Many, if not all, of the spam accounts there had to be opened by a human.
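A rough sketch of that token-plus-timing heuristic in C# (all names are illustrative, the threshold is arbitrary, and a real version would persist the tokens instead of holding them in memory):

    using System;
    using System.Collections.Concurrent;

    public static class ClickTokens
    {
        private static readonly ConcurrentDictionary<Guid, DateTime> Issued =
            new ConcurrentDictionary<Guid, DateTime>();

        // Called during the backend render phase; the returned id is what
        // gets embedded in the element's onclick()/hidden field.
        public static Guid Issue()
        {
            var id = Guid.NewGuid();
            Issued[id] = DateTime.UtcNow;
            return id;
        }

        // Called when the form comes back. An unknown or replayed token fails,
        // and a submit that arrives faster than a human could plausibly read
        // and fill the form is treated as automated.
        public static bool LooksHuman(Guid id, TimeSpan minimumHumanDelay)
        {
            DateTime issuedAt;
            if (!Issued.TryRemove(id, out issuedAt))
                return false;
            return DateTime.UtcNow - issuedAt >= minimumHumanDelay;
        }
    }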
Another Way: http://areyouahuman.com/
As long as you are using encrypted methods, verifying humanity without a crappy CAPTCHA is possible. I mean, don't ignore your headers either; these are complementary approaches.
The key is to have enough complexity that you effectively end up with an NP-complete problem: the number of ways to solve the total set of problems is extraordinary. http://en.wikipedia.org/wiki/NP-complete
When the day comes that AI can solve multiple complex human problems on its own, we will have other things to worry about than request tampering.
http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Another company doing interesting research is http://www.vouchsafe.com/play-games: they actually use games designed so that playing them trains the reverse Turing test (RTT) to be solvable only by humans!
Often when I post a comment or answer on a site I like to keep an eye out for additional responses from other people, possibly replying again if appropriate. Sometimes I'll bookmark a page for a while, other times I'll end up re-googling keywords to locate the post again. I've always thought there should be something better than my memory for keeping track of pages I care about for a few days to a week.
Does anyone have any clever ideas for this type of thing? Is there a micro-Delicious type of online app with a bookmarklet for very short-term follow-up?
Update: I think I should clarify. I wasn't asking about Stack Overflow specifically - on the "read/write web" in general I add comments to blog posts, respond to Google Groups threads, etc. It's that sort of mish-mash of individual pages on random sites that I would care to keep track of for seven to ten days.
For Stack Overflow, I put together a little bookmarklet thing at http://stackoverflow.hewgill.com. I use it to keep track of posts that I might want to come back to later, for reference or to answer if nobody else did, or whatever. The backend automatically retrieves updates from the SO server and updates your list of bookmarked posts.
In my head mostly. I occasionally forget things, but it works well enough.
That's a very interesting question you asked here.
I do the following:
temp bookmarks in the browser
just a tab in Firefox left open for weeks :)
subscription by email/RSS when possible. When an email notification comes, I often put it into a special folder in my email tree.
Different logins, notification types, etc. make following information on the web complicated :(
Other interesting questions:
how to organize information storage (notes, saved web pages, forum threads, etc.) for current use and as a read-only library, sync it between different PCs and USB disks, and how to label (tag) it and search it
how to store old mails, conversations, chats, ...?
how to store digital photos for the future: make hard-copy printouts, or just regularly copy them from one CD to a newer one
Click on your username, then Responses.
Besides IP blocking and probably using a cookie (if the user changes their IP but doesn't remove the cookie, the new IP is added to the banned list, so the IP has to be changed and the cookie removed together to access the site), are there any tricks one can use to block an annoying user from a website? I know that nothing will work against a savvy user, but I'm trying to make it harder for the less savvy ones. Any suggestions?
Edit: I already have registration on my website; the point is that it is useless for stopping determined users (they can simply create other accounts).
@rifferte,
Actually, I'm already building a moderation section where moderators can remove posts and suspend members, and members can report abuse and spam. I'm not trying to make this impossible; there's simply no way to do that. I'm just trying to get rid of the less savvy ones (the majority), and not forever: I'm planning to block them for a certain period of time (probably a couple of days or something like that).
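Purely to illustrate the cookie/IP linking described in the question, here is a rough ASP.NET-flavoured sketch in C#; the names are invented, and the in-memory set is not thread-safe, so a real version would persist and lock the ban list:

    using System;
    using System.Collections.Generic;
    using System.Web;

    public static class BanFilter
    {
        private const string BanCookieName = "bft";   // deliberately bland name
        private static readonly HashSet<string> BannedIps = new HashSet<string>();

        // If the request carries the ban cookie, its current IP joins the ban
        // list; if the IP is banned, the cookie is (re)set. The user has to
        // change IP and clear the cookie at the same time to get back in.
        public static bool IsBlocked(HttpRequest request, HttpResponse response)
        {
            string ip = request.UserHostAddress;
            bool hasBanCookie = request.Cookies[BanCookieName] != null;
            bool ipBanned = BannedIps.Contains(ip);

            if (hasBanCookie && !ipBanned)
                BannedIps.Add(ip);

            if (ipBanned && !hasBanCookie)
                response.Cookies.Add(new HttpCookie(BanCookieName, "1")
                {
                    Expires = DateTime.UtcNow.AddDays(2)   // the "couple of days" idea
                });

            return hasBanCookie || ipBanned;
        }
    }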
Any overt form of censure of an existing user could lead to the forum equivalent of an arms race. One school of thought pushed on the SO podcasts is to flag the offending user and remove their posts from normal view, but include them when they (the bad user) are looking at the site. That way, they think the community is ignoring them, and it makes flaming less fun. If the site isn't trying to stop them but their efforts at flaming are fruitless, they will likely just walk away.
See also this blog by Jeff
One of the best approaches I've ever encountered is the "Tachy goes to Coventry" feature in vBulletin. Adding a user to this list places them on a global ignore list that applies to everyone, except themselves.
So, they continue posting and everything appears normal from their perspective, yet their posts don't disrupt other users. Amazingly, these users rarely seem to figure out what's going on; they're so satisfied with the havoc they think they're wreaking undeterred.
Disruptive users tend to fizzle out very quickly when everyone's ignoring them. Once they give up, you can bulk delete all of their content in one pass that takes relatively little administrative effort.
What sometimes seems to help is to:
Make sure that accounts need to be "mature" before they may post.
A reputation system not unlike Stack Overflow's (account gone = reputation gone) :)
Use authentication providers like OpenID. It is more work to create multiple accounts that way.
The simple fact of the matter is: if someone can do everything right after creating an account, the account does not have any extra value. Once an account has some extra value (i.e., someone needs to put some good work into an account to get more privileges), you'll see that abusers will probably move on to other websites.
I believe you will be in a constant cat and mouse game if the user has that much time to burn.
Your best bet will be to involve some human element in the site's registration process, to properly research any particular user. Not elegant, but without knowing more about your site there isn't too much more one can say.
Now that the question has been further refined with extra information, I'd like to change my answer.
Problem users on forum sites exist because other users feed them.
How about trying an approach where, if you identify a problem user, you silently hide their posts from OTHER users on your site, but not from the problem user themselves. The theory is that the problem user 'thinks' their post made it through, but since it's actually hidden from all other users, nobody will reply to them, and with any luck they'll go elsewhere, somewhere they're actually getting feedback.
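A minimal sketch of that filtering step in C#; the types and the flag name are made up for illustration:

    using System.Collections.Generic;
    using System.Linq;

    public class Post
    {
        public int AuthorId { get; set; }
        public string Body { get; set; }
        public bool AuthorIsShadowHidden { get; set; }
    }

    public static class ThreadView
    {
        // The problem user still sees their own posts, so everything looks
        // normal to them; everyone else simply never sees those posts.
        public static IEnumerable<Post> VisibleTo(IEnumerable<Post> posts, int viewerId)
        {
            return posts.Where(p => !p.AuthorIsShadowHidden || p.AuthorId == viewerId);
        }
    }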
Can you trust your "good" user base to flag bad/annoying users?
Something like craigslist: if a user is flagged as annoying by a few users, their account is temporarily unable to post for a period of time. If this happens a few times, their account is suspended?
Just a thought.
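A small sketch of that escalation in C#; the thresholds and names are invented, not anything craigslist actually uses:

    using System;

    public class MemberModeration
    {
        public int FlagCount { get; set; }
        public int TimesTemporarilyBanned { get; set; }
        public DateTime? PostingBlockedUntil { get; set; }
        public bool Suspended { get; set; }

        // Enough flags earns a timed posting block; repeated blocks earn
        // a full suspension.
        public void RegisterFlag()
        {
            FlagCount++;
            if (FlagCount >= 3)                     // "flagged by a few users"
            {
                FlagCount = 0;
                TimesTemporarilyBanned++;
                PostingBlockedUntil = DateTime.UtcNow.AddDays(2);
                if (TimesTemporarilyBanned >= 3)    // "happens a few times"
                    Suspended = true;
            }
        }

        public bool CanPost()
        {
            return !Suspended &&
                   (PostingBlockedUntil == null || DateTime.UtcNow >= PostingBlockedUntil.Value);
        }
    }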