I am using Paw to test API routes on a local application. At this stage, access is granted via a session cookie that is generated when logging into the application through the site in a browser (specifically Chrome).
I can manually copy the session cookies from Chrome into Paw (and it works), but it is tedious considering I need to do this daily.
Is there a way (like in Postman) to capture/intercept cookies from Chrome in Paw?
As per Micha Mazaheri's comment on the original question, Paw doesn't currently (2015-12-23) support sharing sessions between Chrome and itself.
A future update to the program may add this; this answer will be edited to reflect such an update.
I am working on a major update to my app and I wanted to take advantage of Firestore's offline persistence (web) as a caching layer (as described in this recent Firebase video: https://youtu.be/iQOTjUko9WM).
I have a tester who swears she has no security-related extensions installed, nor any sort of third-party app meant to 'harden' a device.
But my app's offline persistence does not work for her... and only her, out of about 20 testers.
In Chrome she sees the message "Audit usage of navigator.userAgent, navigator.appVersion, and navigator.platform" in her console (but hasn't toggled the triangle to see if there is more info).
In Edge she sees "tracking prevention blocked access to storage for..." and that is followed by "FirebaseError: Failed to get document from cache..."
Is there some sort of permission request needed in some cases? Or some sort of 'defensive' check I need to add before using getDocFromCache()?
I really want to use offline persistence as a cache to prevent unnecessary reads as the user moves around in my app. Thanks in advance!
Sounds like they've got Strict tracking prevention enabled in Edge (and the equivalent in Chrome).
Have them check (in Edge) Settings -> Privacy, search, and services -> Tracking prevention. If it is set to Strict, move back to Balanced; also check the Blocked trackers list and/or add an Exception for your site.
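For the "defensive check" part of the question: with the modular web SDK, getDocFromCache() rejects when the document isn't available in the local cache (or when storage access is blocked, as with Strict tracking prevention), so you can catch that and fall back to a normal fetch. A minimal sketch, with an illustrative document path:

```typescript
import {
  doc,
  getDoc,
  getDocFromCache,
  getFirestore,
  DocumentSnapshot,
} from "firebase/firestore";

// Cache-first read with a server fallback. getDocFromCache() throws when the
// document isn't cached or the browser blocks storage, so the catch branch
// degrades gracefully to an ordinary getDoc(). The path is illustrative.
async function getDocCacheFirst(path: string): Promise<DocumentSnapshot> {
  const ref = doc(getFirestore(), path);
  try {
    return await getDocFromCache(ref);
  } catch {
    return await getDoc(ref); // regular read: server first, per SDK rules
  }
}

// Usage (hypothetical document):
// const snap = await getDocCacheFirst("users/alice");
```

This still saves reads when the cache is healthy, but keeps working for testers whose browsers block local storage.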
I'm working on an app that uses the Google Maps JavaScript API (browser version).
There is one problem I'm very concerned about.
API keys can be restricted so they are only used from your domain; however, any request from your domain (for example, one issued from the code inspector) is considered valid. So anyone can write a simple script and burn through my quota easily.
So, here is my question:
Is there any option or command to block such activity? For example, the script would load just one instance and then refuse to create a new one, or something like that.
P.S. I know about the free quota for the mobile versions of the API, but I need the browser version to work too. Obviously, I don't expose this anywhere public, but anyone could pretend to be a client, order some service for a couple of bucks, and then run a script that makes an impact in the thousands.
How do I disable offline caching for Firefox in ASP.NET or in IIS? I found this post:
Disabling browser caching for all browsers from ASP.NET
This doesn't address the issue completely. It just disables caching for the back button (when not in offline mode).
Here is a simple scenario:
Suppose user A logs on to his bank. User A does some transactions and even updates some personal data. Finally, user A is done and logs off from the bank's website. He leaves the browser open, because another tab is downloading a file that is a few gigs. User B would like to get on to his email to send some messages, so user A doesn't close the browser. User A knows the security risks, because he has read what must be done once you log off of the site, but he doesn't want to stop the download; having to redownload would take too much time, and, being a typical user, he doesn't think user B (a good friend of his) will do anything malicious. So user B uses the browser. The first thing user B does is switch to "Work Offline". User B now has all of user A's data: the pages user A browsed to are in the offline cache, and user B can open the history to view those cached pages, or simply click Back if the page was left open (either way works). Any sensitive data is now his.
Does anyone know if this is possible to control at the server level? I know in Firefox you can go to about:config, but that is not something the server can tweak. Users can be told to do it, but not every user will be able to (it's too complicated for some), and some will just ignore the warnings out of laziness or simply not read what the page says. I know someone will say, "oh well, that's their own fault and they deserve it", but I honestly think ignorance in this sense is not the user's fault. Consider an older person in their 80s who is not technology-centric (like my father, whom I constantly give the do's and don'ts of being online, but who still doesn't completely understand the risks).
So, to reiterate: is it possible to disable this kind of offline caching at the server level? I also found this post:
http://forums.asp.net/post/1386380.aspx
Would this help at all? Any help would be appreciated. Please be constructive and don't start a debate. I think I have been very clear, and I have done a lot of research on this with no luck. Please note that only offline caching in Firefox is giving me the problem; in every other browser (and in Firefox online) caching has been disabled as expected.
Update:
I actually already have what the last link suggests (http://forums.asp.net/post/1386380.aspx), and it still doesn't prevent the problem.
Disabling the cache from the server side is effectively impossible, because the server can only ask the browser not to cache content; whether the browser honors that request is up to the browser.
The best option is not to send the sensitive data with the page at all, so it is never cached; instead, fetch it on demand using JSON/XML or anything you are comfortable with.
The only trick that worked for me was to stop loading sensitive information through the regular page methods and load it via AJAX/jQuery in the window.ready event instead. Once I implemented the AJAX callbacks, the back-button and 'Work Offline' problems were solved, but rolling that out across the site was a big task. A minimal sketch of the idea is below.
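The answer above uses jQuery; here is a minimal sketch of the same idea using plain fetch, with a made-up endpoint and element ID:

```typescript
// Load sensitive data only after the page is delivered, instead of rendering
// it into the page (where offline caching would preserve it). The endpoint
// and element ID are illustrative assumptions.
document.addEventListener("DOMContentLoaded", async () => {
  const response = await fetch("/api/account-summary", {
    credentials: "same-origin", // send the session cookie
    headers: { Accept: "application/json" },
  });
  if (!response.ok) return; // e.g. the session has already expired

  const summary = await response.json();
  const target = document.getElementById("account-summary");
  if (target) target.textContent = `Balance: ${summary.balance}`;
});
```

Because the sensitive values arrive via fetch rather than as part of the page, "Work Offline" and the back button only replay the empty shell (assuming the API responses themselves are served with no-store headers).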
I have an MVC application that I would like to add some custom stats to. For some of the stats, it would be nice to have a unique identifier for a device.
For example, if I have a unique ID for each RSS subscriber, I can monitor the number of active RSS subscribers.
I was wondering if anyone knew of anything in the web request that could be used as an ID other than the IP (which can obviously change). Something like a device ID, perhaps?
Here are some approaches to consider.
HTTP Headers
There are a few HTTP headers you can look at that can help you identify a unique user or device - some refer to the SIM card, some to the device.
Here is a list that I derived from the headers that Google Adsense Mobile uses to help track their advertising:
x-dcmguid
x-up-subno
x-jphone-uid
x-em-uid
These are probably some of the most common ones, but there will be more vendor/device-specific headers in circulation. You could start gathering all the headers your site receives, count how many of each you see, and build up your own database of common headers - a rough sketch follows.
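The question is about ASP.NET MVC, but as a quick illustration of the gathering idea, here is a sketch using Node/Express (purely an assumption for the example):

```typescript
import express from "express";

// Count how often each request header name is seen, to build up a picture of
// which vendor/device headers your audience actually sends.
const headerCounts = new Map<string, number>();
const app = express();

app.use((req, _res, next) => {
  for (const name of Object.keys(req.headers)) {
    headerCounts.set(name, (headerCounts.get(name) ?? 0) + 1);
  }
  next();
});

// Hypothetical admin route to inspect the collected counts.
app.get("/admin/header-stats", (_req, res) => {
  res.json(Object.fromEntries(headerCounts));
});

app.listen(3000);
```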
Some other approaches
Cookies
A cookie is something that can be set on the requesting agent (a browser, for example) and returned when the agent visits again. For a robust approach, check out Evercookie - the virtually permanent cookie - which stores its value via all of the following mechanisms, so it persists as long as at least one of them still works (a usage sketch follows the list):
- Standard HTTP Cookies
- Local Shared Objects (Flash Cookies)
- Silverlight Isolated Storage
- Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
- Storing cookies in Web History
- Storing cookies in HTTP ETags
- Storing cookies in Web cache
- window.name caching
- Internet Explorer userData storage
- HTML5 Session Storage
- HTML5 Local Storage
- HTML5 Global Storage
- HTML5 Database Storage via SQLite
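For reference, Evercookie's published usage looks roughly like the following; treat the exact API as an assumption and check the project's README before relying on it:

```typescript
// evercookie.js is a plain browser script with a global constructor, so we
// declare its (assumed) shape here rather than importing a module.
declare const evercookie: new () => {
  set(name: string, value: string): void;
  get(name: string, callback: (value: string | undefined) => void): void;
};

const ec = new evercookie();
ec.set("device_id", "12345"); // written to all the storage mechanisms above

ec.get("device_id", (value) => {
  // The value is recovered if at least one storage mechanism survived.
  console.log("Recovered id:", value);
});
```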
Combinations
It's also possible to come up with your own scheme, e.g. take the user-agent header plus some other headers like accept and x-forwarded-for, add the IP, and make a unique hash value out of them to more accurately determine the uniqueness of the agent - see the sketch below.
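A minimal sketch of that combination scheme in Node; the choice of headers and of SHA-256 are illustrative assumptions:

```typescript
import { createHash } from "crypto";
import type { IncomingMessage } from "http";

// Combine several request attributes into one opaque fingerprint. This only
// approximates uniqueness: shared proxies, shared browsers, and changing IPs
// all blur the result.
function fingerprint(req: IncomingMessage): string {
  const parts = [
    req.headers["user-agent"] ?? "",
    req.headers["accept"] ?? "",
    String(req.headers["x-forwarded-for"] ?? ""),
    req.socket.remoteAddress ?? "",
  ];
  return createHash("sha256").update(parts.join("|")).digest("hex");
}
```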
There are many different mobile headers, as seen here. I also hit a page of mine from various devices and store their mobile headers for my own purposes here: http://wap.defza.com/ua/ua.txt (also ua1.txt, ua2.txt, etc.).
The short answer is there isn't one (and with good reason, given privacy concerns). The more helpful answer is that this is something you would normally do using cookies: you set a cookie, then check it to identify the specific browser making the request. A sketch of this approach follows below.
Of course, this is by no means fool-proof, since users can reject cookies or delete them, and they can use many different browsers (each of which will have a different cookie). If you are being devious (and I wouldn't recommend this), you could use a Local Shared Object (Flash cookie), as it is less likely to be removed. At the end of the day, though, if someone doesn't want to be tracked, you can't force them to be.
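A minimal sketch of the cookie approach, assuming a Node/Express server; the cookie name and lifetime are arbitrary choices:

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();

// Assign each new browser a random ID cookie and recognise it on later visits.
app.use((req, res, next) => {
  const match = (req.headers.cookie ?? "").match(/(?:^|;\s*)visitor_id=([^;]+)/);
  const id = match?.[1] ?? randomUUID();
  if (!match) {
    // One-year cookie; HttpOnly keeps it out of reach of page scripts.
    res.setHeader(
      "Set-Cookie",
      `visitor_id=${id}; Max-Age=31536000; Path=/; HttpOnly`
    );
  }
  console.log("Request from visitor", id); // use the ID in your stats
  next();
});

app.listen(3000);
```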
Generally, though, if you want analytics and tracking, consider using a third-party solution like Google Analytics. It will give you very detailed data (albeit still relying on cookies and JavaScript) about your visitors and their browsing habits.
"other than the IP"
If your site doesn't require any sort of authentication in order to serve this content, the IP address is the only thing you can get to identify clients, and even that might not be unique: for example, two clients behind the same proxy would be indistinguishable. Another possibility is to use cookies, but that sort of falls into the first category: authentication.
There is no identifier provided by the browser; privacy concerns make it very unlikely that any vendor would ever implement one, for now at least.
The only option you have is some form of cookie.
For RSS feeds, you could conceivably embed a random unique ID in the feed URL each time it's rendered, so you'd know when the person who retrieved that URL downloaded your feed. However, if the user shared that URL with others, you'd have no real way of knowing. A rough sketch:
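The sketch below assumes you control both the page that renders the feed link and the endpoint serving the feed; all names are illustrative:

```typescript
import { randomUUID } from "crypto";

// Each rendered feed link carries its own ID; fetches of that ID tell you the
// subscriber behind that link is still alive.
const issuedFeedIds = new Map<string, { issuedAt: Date; lastFetch?: Date }>();

function renderFeedLink(baseUrl: string): string {
  const id = randomUUID();
  issuedFeedIds.set(id, { issuedAt: new Date() });
  return `${baseUrl}/feed/${id}.xml`;
}

function recordFeedFetch(id: string): void {
  const entry = issuedFeedIds.get(id);
  if (entry) entry.lastFetch = new Date(); // subscriber is still active
}

function activeSubscribers(since: Date): number {
  let count = 0;
  for (const entry of issuedFeedIds.values()) {
    if (entry.lastFetch && entry.lastFetch >= since) count++;
  }
  return count;
}
```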
I build ASP.NET websites (hosted under IIS 6 usually, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want to push a hotfix, for example).
I know the web is basically stateless, so I can't be sure whether someone has closed the browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely of logged-in users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object, incremented and decremented as sessions start and end. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default session timeout is 20 minutes, the accuracy of this method isn't guaranteed (but that applies to any web application, due to the stateless and disconnected nature of HTTP).
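The MSDN example uses ASP.NET's Session_Start/Session_End events in Global.asax. Purely as a framework-neutral illustration of the same counting idea (not the ASP.NET API itself), here is a sketch that counts a session as online until it has been idle past the 20-minute timeout:

```typescript
// Track last-seen timestamps per session and count the recent ones.
const SESSION_TIMEOUT_MS = 20 * 60 * 1000; // mirrors ASP.NET's default

const lastSeen = new Map<string, number>();

// Call on every request the session makes.
function touchSession(sessionId: string): void {
  lastSeen.set(sessionId, Date.now());
}

// Count sessions seen within the timeout window, pruning expired ones.
function usersOnline(): number {
  const cutoff = Date.now() - SESSION_TIMEOUT_MS;
  for (const [id, seen] of lastSeen) {
    if (seen < cutoff) lastSeen.delete(id);
  }
  return lastSeen.size;
}
```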
I know this is a pretty old question, but I figured I'd chime in: why not use Google Analytics and view its real-time dashboard? It requires only minor code modifications (i.e. a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
Sessions, suggested by other users, are a basic way of doing this, but they are not very reliable: they work well in some circumstances and not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests for the binary data are tracked by ASP.NET too), yet they are still using your website.
Thus, my suggestion is to use the server logs to detect whether the website is currently being used by many people. This gives you the ability to:
- See what sort of requests are being made. It's quite easy to tell humans from crawlers, and with some experience it's also possible to see whether a human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number to order something) or not (such as browsing).
- See who is making those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless search ranking doesn't matter to you. On the other hand, if a bot has been trying for two hours to crack your website by making requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, having written this long answer, I would be angry if Stack Overflow went offline seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page, as sketched below. Of course, you must be careful when implementing such a feature, and take into account that it will increase bandwidth usage and will not work if the user has JavaScript disabled.
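A minimal sketch of that keep-alive note; the /heartbeat endpoint and the 60-second interval are assumptions, not part of the original answer:

```typescript
// While the page stays open, ping the server periodically so it can tell
// that this session is still active.
const HEARTBEAT_INTERVAL_MS = 60_000;

setInterval(() => {
  fetch("/heartbeat", { method: "POST", credentials: "same-origin" }).catch(
    () => {
      /* ignore transient network errors */
    }
  );
}, HEARTBEAT_INTERVAL_MS);
```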
You can run the netstat command and see how many active connections exist on your website's ports.
The default port for HTTP is *:80.
The default port for HTTPS is *:443.