How to view a source page using Wireshark? - networking

Is it possible to capture and view a website's source page using Wireshark? I do not need the header packets; what I am looking for is the full page source of any site that I open while Wireshark is running.

Yes - in the list of packets, right-click and choose "Follow TCP Stream". For uncompressed content, that's it.
If the content is gzipped, you need to save that output to a file, use a decent text editor (one that won't mangle binary content) to strip away the headers, then run gunzip to decompress it.
(If anyone knows of a way to make Wireshark do all that itself, I'm all ears!)
Edit: Just noticed the 'chunked-encoding' tag... that makes it harder. Editing away the chunk headers in the text editor should be possible, but tedious if there are a lot of them; the sketch below scripts both steps instead.
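For the scripted route, here is a minimal Python sketch, assuming you saved just the server-to-client direction of the stream and it contains a single HTTP response (the response.bin filename is a placeholder): it splits off the headers, undoes chunked encoding, and gunzips the body.
import gzip

# Raw response saved via "Follow TCP Stream" -> "Save As".
with open("response.bin", "rb") as f:
    raw = f.read()

# Headers end at the first blank line (CRLF CRLF).
head, _, body = raw.partition(b"\r\n\r\n")
headers = head.decode("latin-1").lower()

# Undo chunked transfer encoding: each chunk is "<hex size>\r\n<data>\r\n",
# terminated by a zero-size chunk.
if "transfer-encoding: chunked" in headers:
    decoded = bytearray()
    pos = 0
    while True:
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol].split(b";")[0], 16)  # ';' starts chunk extensions
        if size == 0:
            break
        decoded += body[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2  # skip the data and its trailing CRLF
    body = bytes(decoded)

# Decompress if the body was gzipped.
if "content-encoding: gzip" in headers:
    body = gzip.decompress(body)

print(body.decode("utf-8", errors="replace"))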

Related

Shrinking MP4 for Wordpress

I am trying to compress a video for WordPress, as each time I open my webpage the video barely loads and then freezes. How should I go about compressing the video? (I have already zipped it and used a program, but at 324 KB it still seems too large.) I have heard something about changing the bitrate; is this helpful, and how can I do that? I would like to keep it as an MP4 if possible.
The only way to change the bitrate of a video file is to re-encode it. There is plenty of software capable of doing so, my favorite being Avidemux, which is free and reliable.
Open your file in the app, choose an encoding and a bitrate, hit "Save Video", and you're good to go.
You might have to try a few different bitrates until you get a file that will both load fast and look good on your website.
Be sure to always use the highest-quality source file available for the re-encoding operation, since re-encoding will always reduce your video's quality.
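If you'd rather script it, here's a rough command-line equivalent as a Python sketch, assuming the free ffmpeg tool instead of Avidemux; the file names and the 800k target bitrate are placeholders to experiment with.
import subprocess

# Re-encode the video at a lower bitrate, keeping the audio as-is.
subprocess.run(
    ["ffmpeg",
     "-i", "input.mp4",   # your highest-quality source file
     "-b:v", "800k",      # target video bitrate; lower = smaller file
     "-c:a", "copy",      # copy the audio track without re-encoding
     "output.mp4"],
    check=True,
)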

cache background image

Is there a way to "cache" a background image?
For example..
The background image is 3×3 px and is set like this:
body {
  background: #000 url(bg.png);
}
When a refresh happens, the background image "flickers" for a second.
Is there a cross-browser solution? (for Apache/PHP server if that is relevant)
If you go to seo.hr and browse the navigation, you can see what I'm trying to do.
http://www.seo.hr/
http://www.seo.hr/usluge/izrada-stranica
http://www.seo.hr/usluge/optimizacija-za-trazilice
I think you first need to determine whether the issue actually is a caching issue or whether it's caused by the size of your image. You could use a program like Wireshark or Fiddler for this, but honestly that's overkill for your needs, and you probably already have a browser with developer tools.
Here's how you determine where an image is coming from in Chrome (the other browsers are similar).
Open your developer tools and go to the "Network" tab.
Find "bg.png" in the list of network requests and click on it's name. Below is an example of having selected a stack overflow image from this page.
Notice that it says status 200 (from cache). The browser didn't need to go out to the server and rerequire that resource. It used the cache. If that "from cache" text wasn't there it wasn't reusing cached resources.
There is also the potential that you'll get a status code of 304. That means the server said the image wasn't modified since your last request; you do make the trip to the server in that case.
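If you want to see that conditional round trip by hand, here's a small sketch using Python's http.client (the host, path, and date are placeholders):
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request(
    "GET",
    "/bg.png",
    headers={"If-Modified-Since": "Mon, 01 Jan 2024 00:00:00 GMT"},
)
resp = conn.getresponse()
# 304 means "not modified since that date, reuse your cached copy".
print(resp.status, resp.reason)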
Ok, so my image wasn't in cache... now what?
There are a few reasons that this could occur.
Your response headers aren't set to tell the browser to cache the image (they're visible in that same "Headers" tab where you would have seen the status code if the browser actually went to the server for the image). You'll want to set Cache-Control and Expires to something that makes sense for you. Cache headers can get a bit complicated, so you may want to browse through a caching tutorial.
Is it served over SSL? If so, not all browsers cache SSL content, but most modern browsers do. Set Cache-Control: public on these images (and also Expires).
The real question here is: how do you fix this? Unfortunately, that's entirely dependent on the server and/or the framework you are using. Since the OP is using Apache, the documentation for the Apache module mod_expires is a great place to learn how to tweak caching for the site.
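To make the two headers concrete, here's an illustrative sketch using Python's built-in http.server (on the OP's Apache/PHP stack, mod_expires does this for you; the 30-day lifetime is an arbitrary choice):
from datetime import datetime, timedelta, timezone
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Attach caching headers to every PNG response.
        if self.path.endswith(".png"):
            expires = datetime.now(timezone.utc) + timedelta(days=30)
            self.send_header("Cache-Control", "public, max-age=2592000")
            self.send_header("Expires",
                             expires.strftime("%a, %d %b %Y %H:%M:%S GMT"))
        super().end_headers()

HTTPServer(("", 8000), CachingHandler).serve_forever()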
Yes!
You should decide what's most suitable for you, but at this point we have a few methods, like:
Pure HTML/CSS
JavaScript only
Mixed HTML/CSS/JavaScript
Using base64 to encode the image somewhere in the source code
At this point I recommend a mixed solution using JavaScript, as that will make it work on as many browsers as possible.
There is a good tutorial at:
http://perishablepress.com/press/2009/12/28/3-ways-preload-images-css-javascript-ajax/
Having several images in one can take you a step beyond that, so check this sprites article:
http://www.alistapart.com/articles/sprites/
You can try encoding your image in base64 and putting it directly into the CSS source code. I found a question about the pros and cons over here.
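Generating the data URI is nearly a one-liner in most languages; for example, a Python sketch that emits the CSS rule with bg.png inlined, so the tile ships with the stylesheet and there is no separate request to flicker on refresh:
import base64

with open("bg.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

print("body { background: #000 url(data:image/png;base64,%s); }" % encoded)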
Make your tiled image much, much larger. When the browser engine renders the page, it has to repeat the tile to cover the entire width and height of your object, which results in bad performance with small tiles on large objects.
Small tiles -> more repetitions -> slower performance
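One way to act on that, as a sketch assuming the Pillow imaging library (pip install Pillow): pre-tile the tiny image into a larger repeat unit offline, so the browser has far fewer repetitions to paint.
from PIL import Image

tile = Image.open("bg.png")                 # the original 3x3 tile
big = Image.new(tile.mode, (300, 300))      # a much larger repeat unit
for x in range(0, big.width, tile.width):
    for y in range(0, big.height, tile.height):
        big.paste(tile, (x, y))
big.save("bg-large.png")                    # reference this one from the CSS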

HTTP: pictures not fully received. How to avoid that, i.e. force the browser to try again?

I have written a small picture script which shows a directory listing with thumbnails and also previews of the pictures.
Directory listing example
Image preview example
Source code
In some cases, when you click through several image previews (you can also use the left/right arrow keys to do that faster), some images don't fully load (and they are then only partly shown).
I think this has started to happen more often since I began preloading the next few pictures, but it also happened before. It occurs most often if you switch images very quickly.
I wonder why this happens and how I can avoid it. I guess the browser somehow loses the connection to the server (or the server closes it unexpectedly for some reason). I tried to work around this by setting Content-Length (hoping the browser would reconnect automatically if the file was not fully received), but that didn't help.
Also, in the browser, a normal reload of the page doesn't help; I have to force a full reload.
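A quick way to confirm the truncation from the client side is to fetch an image and compare the bytes received with the advertised Content-Length; a Python sketch (the URL is a placeholder for one of the affected images):
import urllib.request

url = "http://example.com/pics/photo.jpg"
with urllib.request.urlopen(url) as resp:
    declared = resp.headers.get("Content-Length")
    body = resp.read()

if declared is not None and len(body) != int(declared):
    print("truncated: got %d of %s bytes" % (len(body), declared))
else:
    print("response complete")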

In CSS does the iepngfix.htc file get called once, or is it re-read for each element?

A site I am working on just exceeded the monthly bandwidth our host provides (25,000 MB), and when looking at the server stats and logs, I found TwinHelix's iepngfix.htc to be the #4 largest bandwidth drain: #4 hits:73939 KBytes:181035 /iepngfix.htc
I find this especially interesting because a .swf used as a background image on every page had only 3,918 hits, compared to the 73,939 hits that iepngfix.htc received. It's hard for me to believe that there are even that many IE6 users visiting this site.
This file is being called within screen.css in the following way:
img, div, input { behavior: url("iepngfix.htc") }
The only way I can explain this 4 KB file eating so much bandwidth is if it is being read and re-read for every single img, div, and input element, whether or not a PNG is used, and possibly by more browsers than just IE.
Am I understanding this correctly?
If anyone could help me understand how all this works, it would be much appreciated. Thanks!
It could be that caching is not properly set up for the .htc file extension on your web server. Check the response headers, e.g. using Firebug, to see what caching instructions get served.
Also, using Firebug's "Net" tab, you'll be able to see whether the URL gets loaded in non-IE browsers. It shouldn't, but you never know.
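If you'd rather check from a script than from Firebug, a small Python sketch that prints the caching headers served for the file (the URL is a placeholder for your own site):
import urllib.request

req = urllib.request.Request("http://example.com/iepngfix.htc", method="HEAD")
with urllib.request.urlopen(req) as resp:
    for name in ("Cache-Control", "Expires", "Last-Modified", "ETag"):
        print(name, ":", resp.headers.get(name))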

How does YouTube prevent video content from being saved/redistributed?

Sure, you can embed a YouTube video on any site, but the content ultimately must come from their server. What technology(ies) do they have that prevent us from saving/redistributing the content?
From a protocol standpoint, you would think that anything that comes over the wire could be saved. I hope I am not the only guy on Earth who does not know how to "save" a YouTube video...
There are a couple of plugins for Firefox out there that let you save the content. Basically, they parse the page source, look for the video file (either .flv or .mp4), and download that directly. The Flash player on the page just plays the supplied file. YouTube could of course obfuscate the path to the video file, but that can be reverse engineered as well. They can't really do anything about it, because the video file has to be on the user's computer at some point, and even if it weren't, the stream could be intercepted as well.
eg. https://addons.mozilla.org/de/firefox/addon/6584/?src=api
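As a rough illustration of what those plugins do (not YouTube's actual page structure, which is obfuscated and changes; the URL and pattern are placeholders), a Python sketch that scans a page's source for direct video file URLs:
import re
import urllib.request

# Placeholder URL; real players hide the file path inside player config.
page = urllib.request.urlopen("http://example.com/watch?v=xyz")
html = page.read().decode("utf-8", errors="replace")

# Look for direct links to .flv or .mp4 files in the page source.
for url in re.findall(r"https?://[^\s\"'<>]+\.(?:flv|mp4)", html):
    print(url)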
Mostly it's a legal deterrent rather than a technical one. There are a plethora of programs out there that will let you download their videos. But there are two things they do that help reduce unauthorized downloads:
They use Flash to control the download and playback.
Hosting video yourself is not cheap, so it's much easier to simply leave the video on YouTube.
They don't do anything about it. Very likely your Flash viewer downloads a copy and puts it somewhere on your hard drive (on my Linux system with Firefox and Adobe Flash, in /tmp). After you are done viewing, the file is removed to save disk space, but since it was on your hard drive, nothing prevents you from making a copy elsewhere.
You might want to look at the 'analogue hole': in the end, the data still has to be displayed on your screen or come out of your speakers. It's always theoretically possible to intercept it at that point, or even just record your audio-out into another machine.
So as far as the analogue hole goes, the only solution is to skip that, in this form:
(image; source: thisdomainisirrelevant.net)
Which is not that marketable.
