I have been using Wireshark and I noticed that my plugin seems to track everything I do. I would like to see what it is sending back.
http://dynamic.hotbar.com/dynamic/hotbar/disp/3.0/sitedisp.dll?GetSDF&Dom=usinsuranceonline.com&Path=&SiteVer=0
The content looks something like this:
Headers:
HTTP/1.1 200 OK
Connection: close
Date: Thu, 19 May 2011 05:40:29 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 90
Content-Type: app/x-hotbar-xip20
Expires: 0
Content XIP_2.0|1.SDF|67|70|xŚˇ334‹330±00ąr+-ĪĢ+.-JĢKNĶĻĖÉĢKÕKĪĻ¨3ä ¨Ó¸3©Õ5ār‘qŁ©•åłE)Å\Č#
When I try to open it, it downloads a sitedisp.dll file which does not tell me anything. How can I open/decode these types of files?
It seems to be connected to an IE plugin.
If it is a .NET dll, you can see its contents quite easily with Reflector.
http://reflector.red-gate.com/download.aspx?TreatAsUpdate=1
That said, the Hotbar terms of service appear to explicitly say you shall not decompile or reverse engineer the product... The message does appear to be encrypted in some way, as Tony mentioned. Depending on the scheme they used, it may be terribly difficult to see the contents in clear text, and since you're not supposed to be reverse engineering it, I vote that you just delete the program from your machine entirely and move on to something else.
sitedisp.dll is a DLL file, which means it's a binary that contains executable code. Opening it will probably not teach you much unless you know assembler or, if it's a .NET DLL, Intermediate Language. The data which is sent back to the server is not going to be part of that DLL.
It looks like this:
xŚˇ334‹330±00ąr+-ĪĢ+.-JĢKNĶĻĖÉĢKÕKĪĻ¨3ä ¨Ó¸3©Õ5ār‘qŁ©•åłE)Å\Č#
could be the data in some encrypted form; however, I'm not sure.
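One thing worth ruling out before assuming strong encryption: compressed data pasted as text looks exactly like this, and the "|67|70|" fields in the prefix could plausibly be compressed/uncompressed lengths. A minimal sketch that tries plain zlib, assuming payload.bin is a hypothetical file holding the raw response body exported from Wireshark (e.g. via Follow TCP Stream > Save As):

import zlib

with open("payload.bin", "rb") as f:
    data = f.read()

# The pipe-delimited prefix ("XIP_2.0|1.SDF|67|70|") looks like metadata;
# 0x78 0x9C is the most common zlib header, so scan for it and inflate from there.
start = data.find(b"\x78\x9c")
if start == -1:
    raise SystemExit("no zlib header found - probably a different scheme")
try:
    print(zlib.decompress(data[start:]))
except zlib.error as exc:
    print("not plain zlib after all:", exc)

If that prints readable text, the payload was never encrypted at all, just deflated.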
This is inside a bash file:
s3cmd --add-header='Content-Encoding':'gzip' put /home/media/main.js s3://myproject/media/main.js
This is what I do to upload my compressed Backbone file to Amazon S3.
I run this command every time I make changes to my JavaScript files.
However, when I refresh the page in Chrome, Chrome still uses the cached version.
Request headers:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8,es;q=0.6
AlexaToolbar-ALX_NS_PH:AlexaToolbar/alxg-3.3
Cache-Control:max-age=0
Connection:keep-alive
Host:myproject.s3.amazonaws.com
If-Modified-Since:Thu, 04 Dec 2014 09:21:46 GMT
If-None-Match:"5ecfa32f291330156189f17b8945a6e3"
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36
Response headers:
Accept-Ranges:bytes
Content-Encoding:gzip
Content-Length:70975
Content-Type:application/javascript
Date:Thu, 04 Dec 2014 09:50:06 GMT
ETag:"85041deb28328883dd88ff761b10ece4"
Last-Modified:Thu, 04 Dec 2014 09:50:01 GMT
Server:AmazonS3
x-amz-id-2:4fGKhKO8ZQowKIIFIMXgUo7OYEusZzSX4gXgp5cPzDyaUGcwY0h7BTAW4Xi4Gci0Pu2KXQ8=
x-amz-request-id:5374BDB48F85796
Notice that the ETag is different: I made changes to the file, but when I refreshed the page, that's what I got. Chrome is still using my old file.
It looks like your script has been aggressively cached, either by Chrome itself or some other interim server.
If it's a JS file called from an HTML page (which it sounds like it is), one technique I've seen is having the page add a parameter to the file:
<script src="/media/main.js?v=123"></script>
or
<script src="/media/main.js?v=2015-01-03_01"></script>
... which you change whenever the JS is updated (but will be ignored by the server). Neither the browser nor any interim caching servers will recognise it as the same and will therefore not attempt to use the cached version - even though on your S3 server it is still the same filename.
Whenever you do a release you can update this number/date/whatever, ideally automatically if the templating engine has access to the application's release number or id.
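If you'd rather tie the token to the content than to releases, you can derive it from the file itself, so it changes exactly when the file does. A small sketch (the helper name is illustrative, not part of the asker's setup):

import hashlib

def versioned_url(path, url):
    # Hash the file's contents and use a short prefix as the cache-busting token.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    return url + "?v=" + digest

print(versioned_url("/home/media/main.js",
                    "https://myproject.s3.amazonaws.com/media/main.js"))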
It's not the most elegant solution but it is useful to have around if ever you find you have used an optimistically long cache duration.
Obviously, this only works if you have uploaded the new file correctly to S3 and S3 is genuinely sending out the new version of the file. Try using a command-line utility like curl or wget on the URL of the JavaScript to check this is the case if you have any doubts.
The Invalidation Method
s3cmd -P --cf-invalidate put /home/media/main.js s3://myproject/media/main.js
--cf-invalidate     Invalidate the uploaded file in CloudFront.
-P, --acl-public    Store objects with ACL allowing read for anyone.
This will invalidate the cache for the file you specify. It's also possible to invalidate your entire site; however, the command above shows what I would imagine you'd want in this scenario.
Note: The first 1000 requests/month are free. After that it's approximately $0.005 per file, so if you do a large number of invalidation requests this might be a concern.
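If you are scripting this in Python rather than with s3cmd, the same invalidation can be issued with boto3; a sketch, where the distribution ID is a placeholder you would replace with your own:

import time
import boto3

cf = boto3.client("cloudfront")
cf.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/media/main.js"]},
        "CallerReference": str(time.time()),  # any unique string works
    },
)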
The Query String / Object Key Method
CloudFront includes the query string (on origin) from the given URL when caching the object. What this means is that even if you have the same exact object duplicated, but the query strings are different, then each one will be cached as a different object. In order for this to work properly you'll need to select Yes for Forward Query Strings in the CloudFront console or specify true for the value of the QueryString element in the DistributionConfig complex type when you're using the CloudFront API.
Example:
http://myproject/media/main.js?parameter1=a
Summary:
The most convenient method of ensuring the object being served is current would be invalidation, although if you don't mind managing the query string parameters you should find that approach just as effective. Adjusting the headers won't be nearly as reliable as either method above, in my opinion; clients handle caching in so many different ways that it's hard to pin down where a caching issue lies.
You need the response from S3 to include the Cache-Control header. You can set this when uploading the file:
s3cmd --add-header="cache-control:max-age=0,no-cache" put file s3://your_bucket/
The lack of whitespace and uppercase in my example is due to some odd signature issue with s3cmd. Your mileage may vary.
After updating the file with that command, you should get the Cache-Control header in the S3 response.
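For what it's worth, the boto3 equivalent sidesteps the s3cmd signature quirk entirely, since the header is passed as a parameter; a sketch with placeholder bucket/key names:

import boto3

s3 = boto3.client("s3")
with open("file", "rb") as f:
    s3.put_object(
        Bucket="your_bucket",
        Key="file",
        Body=f,
        CacheControl="max-age=0,no-cache",  # becomes the Cache-Control response header
    )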
I'm looking at an existing Web Forms app that I didn't write. It's working as expected in IE8 and FF, but fails in IE9 with:
"Internet Explorer cannot display the webpage"
The code is a simple handler that's doing a context.Response.Redirect.
Using Fiddler, I can see the 302 response, so everything looks fine.
Any ideas why IE9 behaves differently, or what I can do to fix it?
Edit due to request for code:
Sure, here's the line:
context.Response.Redirect("file:" & Filename.Replace("/", "\"))
Fiddler shows:
HTTP/1.1 302 Found
Date: Thu, 09 Aug 2012 19:01:24 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Location: file:J:\Replay\Meetings\Meetings-2012.pdf
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 254
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
I'm asking just to make sure, but do you have the J:\Replay\Meetings\Meetings-2012.pdf file locally on your disk? The file:// protocol is used only to access local files. I suppose it's OK, as you wrote that it works as expected in other browsers.
If so, I've read that this kind of error can be caused by an invalid URL to the file.
Try to redirect like this:
context.Response.Redirect("file://" & Filename)
Let me know if this helps.
This may be a zone elevation issue. Specifically, IE tries to prevent sites in one security zone from elevating themselves to another security zone. Redirecting to your local machine from outside your local machine is considered dangerous.
Possible fixes (I am not sure if these will work in IE9):
1. Add the site that triggers these redirections to the Trusted zone.
2. Change your security settings. Notice the "Websites in less privileged web content zone can navigate into this zone" setting (Internet Options -> Internet Zone -> Custom Level). You need to set that to "Enable" or "Prompt" for the "My Computer" zone. I suspect this can be done by either adding the "My Computer" zone to your zones list ( http://support.microsoft.com/kb/315933 ) or by editing the "My Computer" zone directly (via the registry). You may also need to add the HKCU\Software\Microsoft\Internet Explorer\Main\Disable_Local_Machine_Navigate value (a REG_DWORD set to 0).
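If you go the registry route, this is roughly that tweak sketched with Python's winreg module (the key path and value name are the ones mentioned above; back up your registry before changing anything):

import winreg

key = winreg.CreateKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\Microsoft\Internet Explorer\Main",
)
# 0 = allow navigation into the My Computer zone
winreg.SetValueEx(key, "Disable_Local_Machine_Navigate", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)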
I have searched regarding this content type, Content-Type: application/vnd.google.safebrowsing-update. Some articles on the net cover this type, but I still don't understand it. :S
HTTP/1.0 200 OK
Content-Type: application/vnd.google.safebrowsing-update
X-Content-Type-Options: nosniff
Date: Tue, 16 Dec 2011 05:57:44 GMT
Server: Chunked Update Server
Content-Length: 459
X-XSS-Protection: 1; mode=block
Connection: Keep-Alive
application/ indicates that the content is intended to be opened by a specific application.
vnd. indicates that the MIME type is defined by a specific vendor. This is useful when you create your own file types for your programs and intend to distribute them online. You can register specific MIME types to open with your program without the possibility of interfering with other operations.
google. is the vendor in this case.
safebrowsing-update is the specific type of file from Google. The name is self-explanatory.
The content type tells you that it is application-specific (application/) and is an update to Google's Safe Browsing feature.
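To make the structure concrete, here is the same breakdown done mechanically in Python (purely illustrative):

mime = "application/vnd.google.safebrowsing-update"
top_level, subtype = mime.split("/", 1)
print(top_level)                     # application
print(subtype.startswith("vnd."))    # True: vendor-defined subtype
print(subtype.removeprefix("vnd."))  # google.safebrowsing-update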
The Content-Type header, in general, tells a client which parser to use for a server's response to a request. For example, a web browser might use one interface to render an RSS feed, another to render an HTML web page, and a third to display a PDF.
In this case, the reply is said to contain information about a list of web sites considered safe or unsafe by Google.
This is just making me angry. I can't figure out why all the resources in my page are being requested EVERY single time.
E.g. my site.css returns the following headers (using Fiddler):
HTTP/1.1 200 OK
Server: ASP.NET Development Server/9.0.0.0
Date: Mon, 29 Nov 2010 17:36:21 GMT
X-AspNet-Version: 2.0.50727
Content-Length: 9093
Cache-Control: public, max-age=2592000
Expires: Wed, 29 Dec 2010 17:36:21 GMT
Last-Modified: Mon, 08 Nov 2010 17:20:16 GMT
Content-Type: text/css
Connection: Close
But every time I hit refresh I see all the resources (CSS, JS, images) getting re-requested. I have control over the headers returned for any and all of these resources, but I haven't figured it out yet.
I have even debugged my ASP.NET app and the HttpModule is definitely being asked for the resources again.
Can someone give me an idea of what to do?
EDIT:
OK, I removed must-revalidate, proxy-revalidate from the headers and that is getting me closer to where I want to be; however, it still requests my CSS/JS files when I press back.
Is there anything more I can do to avoid this?
The following links might be of help to you.
Differences in reload behavior between FF and IE
http://blog.httpwatch.com/2008/10/15/two-important-differences-between-firefox-and-ie-caching/
In a nutshell, your caching behavior is going to be determined by the headers and the browser you are using.
What browser are you using for testing? The back button is also handled differently.
Back Button (Browser Behavior)
And, finally, a breakdown of f5/ctrl f5, click, shift click, etc behavior between browsers:
What requests do browsers' "F5" and "Ctrl + F5" refreshes generate?
If you are handling the requests in your own module - which seems to be the case - and the request contains an If-Modified-Since header, you can use that to decide whether to respond with a 200 and send the whole resource again, or just send a 304 and skip sending the JS/CSS/etc. contents.
Other than that, expect browsers to re-query resources on hitting F5 / Refresh. It is just that you may skip sending the whole JS/CSS/etc. and return a 304 if everything is OK.
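To make that concrete, here is the conditional-GET decision in a minimal Python sketch; the asker's module is ASP.NET, but the protocol logic is language-agnostic, and the function and variable names here are illustrative:

from datetime import datetime, timezone

HTTP_DATE = "%a, %d %b %Y %H:%M:%S GMT"

def respond(request_headers, resource_mtime, resource_body):
    ims = request_headers.get("If-Modified-Since")
    if ims:
        since = datetime.strptime(ims, HTTP_DATE).replace(tzinfo=timezone.utc)
        if resource_mtime <= since:
            return 304, b""           # not modified: headers only, no body
    return 200, resource_body         # modified (or unconditional): full resource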
Other than that, @Chris's answer covers pretty much everything else.
What are the request headers you see when you hit back?
Use case: the user clicks a link on a webpage and - boom! - a load of files is sitting in their folder.
I tried to pack the files into a multipart/mixed message, but it seems to work only in Firefox.
This is what my response looks like:
HTTP/1.0 200 OK
Connection: close
Date: Wed, 24 Jun 2009 23:41:40 GMT
Content-Type: multipart/mixed;boundary=AMZ90RFX875LKMFasdf09DDFF3
Client-Date: Wed, 24 Jun 2009 23:41:40 GMT
Client-Peer: 127.0.0.1:3000
Client-Response-Num: 1
MIME-Version: 1.0
Status: 200
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="001.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="002.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3--
Thank you
P.S. No, zipping files is not an option
Zipping is the only option that will have consistent results across all browsers. If it's not an option because you don't know that zips can be generated dynamically, well, they can. If it's not an option because you have a grudge against zip files, well...
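For the record, a dynamic zip really is just a few lines; a sketch using Python's standard library (filenames borrowed from the question, no temp files involved):

import io
import zipfile

def zip_files(paths):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            zf.write(path)
    return buf.getvalue()  # serve with Content-Type: application/zip

payload = zip_files(["001.jpg", "002.jpg"])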
MIME/multipart is for email messages and/or POST transmission to the HTTP server. It was never intended to be received and parsed on the client side of an HTTP transaction. Some browsers do implement it, some others don't.
As another alternative, you could have a JavaScript script open windows that download the individual files, or a Java applet (which requires a Java runtime on the machines; if it's an enterprise application, that shouldn't be a problem, as the NetAdmin can deploy it on the workstations) that downloads the files into a directory of the user's choice.
I remember doing this >10 years ago in the Netscape 4 days. It used boundaries like what you're doing and didn't work at all with other browsers at the time.
While it does not answer your question, HTTP/1.1 supports request pipelining, so at least the same TCP connection can be reused to download multiple images.
You can use base64 encoding to embed a (very small) image into an HTML document; however, from a browser/server standpoint, you're technically still sending only one document. Maybe this is what you intend to do?
Embed Images into HTML using Base64
EDIT: I just realized that most methods I found in my Google search only support Firefox, and not IE.
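Generating such a data URI is straightforward; a sketch in Python, using one of the question's filenames:

import base64

with open("001.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")
print('<img src="data:image/jpeg;base64,' + b64 + '">')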
You could make a JSON document with multiple data URLs.
Eg:
{
"stamp.png": "data:image/png;base64,...",
"document.pdf": "data:application/pdf;base64,..."
}
(extending trinalbadger587's answer)
You could return an HTML page with multiple clickable, downloadable, in-place data links:
<html>
<body>
<a download="yourCoolFilename.png" href="data:image/png;base64,...">PNG</a>
<a download="theFileGetsSavedWithThisName.pdf" href="data:application/pdf;base64,...">PDF</a>
</body>
</html>