Using a CSS media query for print stylesheets can be a great way to make websites more print-friendly:
p { color: grey; }
@media print {
p { color: black; }
}
For one project, we've found that creating PDF files from webpages to send to clients is very efficient (much better than starting from scratch).
For PDFing purposes, we've applied a few simple CSS rules via @media print to make the webpages more friendly in that format — removing navigation, certain footer elements, etc.
(Some people may want to download and print the PDFs at a later date, and that's fine. There will also be a link on each page to access the PDF we've created.)
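For illustration, the PDF-oriented rules amount to a few lines like these (the selectors here are hypothetical; yours depend on your markup):

@media print {
  nav,
  .site-footer .social-links {
    display: none;
  }
}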
However, it seems that for the general public's printing needs, it's advised to create stylesheets without much formatting: remove backgrounds, increase contrast, optimize font-size, and so on.
We haven't done that yet. Can there be more than one set of print rules — one applied when PDFing, and the other when printing to a printer? Or if not, what workarounds are there?
When the end user decides to print, they'll have the option to print to PDF if they have some type of PDF software already installed on their machine. Trying to determine that for them (how it's configured on their machine, which software company they are going through, and which version) is probably not something you can or should do from a CSS perspective. I would personally just rely on your print stylesheet as the main source for both avenues.
Some people will want to print these PDFs, and that's fine...
True, and if they really want to, they should already know that they have a PDF app installed on their machine. Let them choose how to print when they get to the print-preview window. But you could perhaps market/brand a creative help guide on how to print to PDF and where they can get a free PDF app to download.
Regards
I haven't found a way to specify via CSS whether the browser is printing to a physical printer or to a PDF. (In addition to working across various browsers, it would also have to be compatible across multiple operating systems.)
There are two solutions I've come up with:
Have two print stylesheets: when creating PDFs, include the PDF-specific one and/or comment out the one for the public. (Or have one stylesheet, and comment out the relevant lines of code.) There's a sketch of this below.
Have two versions of the HTML document, e.g. index.php and pdf.html, each with different print rules. In this case, you would access the latter when you want to create a PDF, and the former would be the default. The issue is managing two sets of content, so unless you can automate it, this isn't advised.
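A rough sketch of the first option (the file names are made up): the page head carries both print stylesheets, with the PDF-specific one commented out by default:

<link rel="stylesheet" href="screen.css" media="screen">
<link rel="stylesheet" href="print-public.css" media="print">
<!-- swap in when generating the client PDF:
<link rel="stylesheet" href="print-pdf.css" media="print">
-->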
I've written an app that, among other things, lets users export data in a print-friendly format. It does this by generating an HTML file that contains print-related CSS (e.g. the @page rule). The resulting exported file is pure HTML, CSS and JavaScript, no fancy frameworks.
We've also got a printer at work that automatically staples jobs together. So if you print 10 copies of a document that has 3 pages, it'll print 3 pages, staple those together, then repeat.
The HTML file the app exports has about 1,500 records in it, grouped by a field (e.g. Username). I'm using the page-break-before CSS property to force a page break at the end of each grouped section of data, but I'm wondering if there's a way to tell the printer to "end" a document there and start a new one so it'll be stapled?
Basically splitting one file up into several individual "documents", while only sending one job to the printer.
I'm pretty sure there isn't, and the solution is to just print the whole document and manually separate and staple the documents together, which I'm happy to do and will no doubt end up doing, but now I'm curious if there's a way to do a "soft end" to a printed page, in the same way that you can force a page break using CSS.
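For reference, the forced break I'm using is a one-liner (the class name is whatever your grouping markup uses):

.group { page-break-before: always; }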
No, there is no way to instruct the printer to staple a job together based on page breaks specified in the HTML/CSS of the exported file.
You might be able to achieve this with a generated PDF, or rather multiple PDFs that you send at once to the printer.
Our web site has been under constant development for the better part of the last five years. As it happens, pretty much all the styles for the site are in one big CSS file. Over time this CSS file has grown to about 9,000 lines — and I'm sure some of those styles are no longer used, and quite a few provide duplicate functionality.
The site is written with PHP/Smarty; there are over 300 Smarty templates and the whole site contains over 1,000 different pages (read: unique URLs). I'm sure it will continue growing, as will the CSS file.
What's the best way to clean up this file?
Update: Unfortunately, online parsers where I put in a URL won't work for me, as 75% of the site is behind username/password logins — and depending on the login, there are half a dozen different roles, each of which has its own set of pages. There are also transactional elements (an online shop), where pages are displayed after (for example) a credit card payment is taken/processed. I doubt any online tool would be able to handle these. Therefore if there's a tool, it would have to work on a source tree.
Short of going through each .tpl file and searching it for the selectors manually, I don't see any other way.
You could of course use Dust-Me Selectors, but you'd still have to go through each page that uses the .tpl files (not each URL, as I know many of them will be duplicates).
Sounds like a big job! I had to do this once before and did exactly that; it took me a week.
Another tool is a Firebug plugin called CSS Usage. From what I've read it can work across multiple pages but might break if used site-wide. Give it a go.
Triumph! Check out the Unused CSS online tool. Type your index URL into the field and voilà, a few minutes later you get a list of all the used selectors :) I know you want the unused ones, but then the only work is finding those in the file (Ctrl+F) and removing them :)
Make sure to use the second option; they'll email you the results of a crawl of your entire website. It might take up to half an hour, but that's far better than a week. Grab some coffee :)
Just tested it, works a treat :)
I had to do this about 3 years ago on a rather large classic ASP web application.
I took the approach that there are only a finite number of styled items on each page and started by identifying these. For example, I went through the main pages and identified that the majority of labels were bold and dark blue, and that all buttons were the same width.
Once I'd done that, I spoke to the team and we decided that anything that didn't conform to these rules I'd identified should conform, so I wrote a stylesheet based on this assumption.
We ended up with about 30 styles to apply to several hundred pages. Several regular-expression find-and-replaces later (we were fortunate that the original development had used reasonably well-structured HTML), we had something usable that just needed the odd tweak.
The key points are:
Aim for uniformity across the site. In other words, don't assume that the resulting site will look exactly the same as the original; aim for it to look consistent with itself (uniform) from page to page
Tackle the obvious styles first (labels / buttons / paragraph fonts / headers) and then worry about the smaller styles or the unique styles later
You might also find it useful to keep unique styles (e.g. for a dashboard page that has elements appearing nowhere else) in separate files, to keep the main file's size down. Obviously, it depends on your site whether this would help.
Additionally, there are many sites that will search for these for you, like this one: http://unused-css.com/ I don't know how they measure up to Dust-Me Selectors, but I do know that Dust-Me Selectors isn't compatible with Firefox 8.0.
You could use the Dust-Me Selectors plugin for Firefox to find unused styles:
http://www.sitepoint.com/dustmeselectors/
If you have a sitemap you could use that to let the plugin crawl your site:
The spider dialog has all the controls for performing a site-wide spider operation. Enter the URL of either a Sitemap XML file, or an HTML sitemap, and the program will read that file and extract all its links. It will then load each of those pages in turn and perform a cumulative Find operation on each one.
I see there's no good answer yet. I have tried the Unused CSS online tool and it seems to work OK for public sites. The problem is if you have one CSS file serving both your public website and an intranet (for example: a WordPress site plus a login for registered users). The intranet pages won't be crawled and you will lose those CSS styles.
My next try will be using gulp + uncss:
https://github.com/ben-eb/gulp-uncss
You have to define all the URLs of your site (external and internal), and (maybe; I'm not sure) if you are running the site with a username + password in your browser, gulp + uncss may be able to reach the internal URLs too.
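For what it's worth, a minimal gulpfile along the lines of the gulp-uncss README might look like this (the file names and URLs are placeholders for your own):

var gulp = require('gulp');
var uncss = require('gulp-uncss');

gulp.task('uncss', function () {
    // Strip selectors from site.css that none of the listed pages use
    return gulp.src('site.css')
        .pipe(uncss({
            html: ['index.html', 'http://example.com/internal-page']
        }))
        .pipe(gulp.dest('./out'));
});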
Update: I see unused-css online tool has a login solution!
Is there any advantage to having a single monster .css file that contains style elements that will be used on almost every page?
I'm thinking that for ease of management, I'd like to split the CSS into a few files by type and include each file with its own <link /> element. Is that bad?
I'm thinking this is better
positions.css
buttons.css
tables.css
copy.css
vs.
site.css
Have you seen any gotchas with doing it one way vs. the other?
This is a hard one to answer. Both options have their pros and cons in my opinion.
I personally don't love reading through a single HUGE CSS file, and maintaining one is very difficult. On the other hand, splitting it up causes extra HTTP requests, which could potentially slow things down.
My opinion would be one of two things.
1) If you know that your CSS will NEVER change once you've built it, I'd build multiple CSS files in the development stage (for readability), and then manually combine them before going live (to reduce http requests)
2) If you know that you're going to change your CSS once in a while and need to keep it readable, I would build separate files and use code (providing you're using some sort of programming language) to combine them at build time (runtime minification/combination is a resource pig).
With either option I would highly recommend caching on the client side in order to further reduce http requests.
EDIT:
I found this blog that shows how to combine CSS at runtime using nothing but code. Worth taking a look at (though I haven't tested it myself yet).
EDIT 2:
I've settled on using separate files at design time and a build process to minify and combine them. This way I can have separate (manageable) CSS while I develop, and a proper monolithic minified file at runtime. And I still have static files and less system overhead, because I'm not doing compression/minification at runtime.
note: for you shoppers out there, I highly suggest using bundler as part of your build process. Whether you're building from within your IDE, or from a build script, bundler can be executed on Windows via the included exe or can be run on any machine that is already running node.js.
A CSS compiler like Sass or LESS is a great way to go. That way you'll be able to deliver a single, minimised CSS file for the site (which will be far smaller and faster than a normal single CSS source file), while maintaining the nicest development environment, with everything neatly split into components.
Sass and LESS have the added advantage of variables, nesting and other ways to make CSS easier to write and maintain. Highly, highly recommended. I personally use Sass (SCSS syntax) now, but used LESS previously. Both are great, with similar benefits. Once you've written CSS with a compiler, it's unlikely you'd want to do without one.
http://lesscss.org
http://sass-lang.com
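For example, a minimal Sass setup (the file and variable names are illustrative) keeps each component in its own small partial and compiles everything to a single file:

// _buttons.scss -- one small, readable partial
$brand-color: #336699;
.button {
  background: $brand-color;
  &:hover { background: darken($brand-color, 10%); }
}

// site.scss -- the single entry point that compiles to one site.css
@import 'buttons';
@import 'tables';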
If you don't want to mess around with Ruby, this LESS compiler for Mac is great:
http://incident57.com/less/
Or you could use CodeKit (by the same guys):
http://incident57.com/codekit/
WinLess is a Windows GUI for compiling LESS
http://winless.org/
I prefer multiple CSS files during development. Management and debugging are much easier that way. However, I suggest that come deployment time you instead use a CSS minification tool like YUI Compressor, which will merge your CSS files into one monolithic file.
Historically, one of the main advantages in having a single CSS file is the speed benefit when using HTTP1.1.
However, as of March 2018 over 80% of browsers support HTTP2, which allows the browser to download multiple resources simultaneously as well as having resources pushed pre-emptively. Having a single CSS file for all pages means a larger than necessary file size. With proper design, I don't see any advantage in doing this other than it being easier to code.
The ideal design for HTTP2 for best performance would be:
Have a core CSS file which contains common styles used across all pages.
Have page specific CSS in a separate file
Use HTTP2 push to deliver CSS and minimise wait time (a cookie can be used to prevent repeated pushes); see the example below
Optionally separate above-the-fold CSS, push it first, and load the remaining CSS later (useful for low-bandwidth mobile devices)
You could also load remaining CSS for the site or specific pages after the page has loaded if you want to speed up future page loads.
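As one concrete illustration (the path is made up, and the exact mechanism depends on your server), HTTP2 push is commonly triggered by a preload Link header on the HTML response:

Link: </css/core.css>; rel=preload; as=style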
You want both worlds.
You want multiple CSS files because your sanity is a terrible thing to waste.
At the same time, it's better to have a single, large file.
The solution is to have some mechanism that combines the multiple files in to a single file.
One example is something like
<link rel="stylesheet" type="text/css" href="allcss.php?files=positions.css,buttons.css,copy.css" />
Then, the allcss.php script handles concatenating the files and delivering them.
Ideally, the script would check the modification dates on all the files, regenerate the composite if any of them has changed, and check the If-Modified-Since HTTP header so as not to send redundant CSS.
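A minimal sketch of what such a script could look like (an illustration, not anyone's actual allcss.php; note the whitelist so arbitrary files can't be requested):

<?php
// allcss.php -- concatenate whitelisted CSS files into one response
header('Content-Type: text/css');
$allowed = array('positions.css', 'buttons.css', 'copy.css');
$requested = isset($_GET['files']) ? explode(',', $_GET['files']) : array();
$files = array_intersect($requested, $allowed);

// Find the newest modification date among the requested files
$newest = 0;
foreach ($files as $f) { $newest = max($newest, filemtime($f)); }
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $newest) . ' GMT');

// Answer 304 if the client's cached copy is still current
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $newest) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

foreach ($files as $f) { readfile($f); }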
This gives you the best of both worlds. Works great for JS as well.
Having only one CSS file is better for the loading time of your pages, as it means fewer HTTP requests.
Having several small CSS files means development is easier (at least, I think so: having one CSS file per module of your application makes things easier).
So, there are good reasons in both cases...
A solution that would allow you to get the best of both ideas would be:
To develop using several small CSS files (i.e. easier development)
To have a build process for your application that combines those files into one (that build process could also minify the big file, btw; it obviously means your application needs some configuration switch between "multi-files mode" and "mono-file mode")
To use, in production, only the big file (i.e. faster-loading pages)
There is also software that does this combining of CSS files at run-time rather than build-time; but doing it at run-time eats a bit more CPU (and obviously requires some caching mechanism, so the big file isn't regenerated too often).
Monolithic stylesheets do offer a lot of benefits (described in the other answers); however, depending on the overall size of the stylesheet document, you can run into problems in IE. IE has a limit on how many selectors it will read from a single file: 4,096. If your monolithic stylesheet will have more than this, you will want to split it. This limitation only rears its ugly head in IE, but it applies to all versions of IE.
See Ross Bruniges' blog and the MSDN AddRule page.
There is a tipping point at which it's beneficial to have more than one css file.
A site with 1M+ pages, of which the average user is likely to see only, say, 5, might have a stylesheet of immense proportions, so trying to save the overhead of a single additional request per page load by forcing a massive initial download is a false economy.
Stretch the argument to its extreme limit: it's like suggesting there should be one large stylesheet maintained for the entire web. Clearly nonsensical.
The tipping point will be different for each site though so there's no hard and fast rule. It will depend upon the quantity of unique css per page, the number of pages, and the number of pages the average user is likely to routinely encounter while using the site.
I typically have a handful of CSS files:
a "global" css file for resets and global styles
"module" specific css files for pages that are logically grouped (maybe every page in a checkout wizard or something)
"page" specific css files for overrides on the page (or, put this in a block on the individual page)
I am not really too concerned about multiple requests for CSS files. Most people have decent bandwidth, and I'm sure there are other optimizations that would have a far greater impact than combining all styles into one monolithic CSS file. The trade-off is between speed and maintainability, and I always lean towards maintainability. The YUI Compressor sounds pretty cool though; I might have to check that out.
I prefer multiple CSS files. That way it is easier to swap "skins" in and out as you desire. The problem with one monolithic file is that it can get out of control and hard to manage. What if you want blue backgrounds but don't want the buttons to change? Just alter your backgrounds file. Etc.
Maybe take a look at compass, which is an open source CSS authoring framework.
It's based on Sass, so it supports cool things like variables, nesting, mixins and imports. Imports are especially useful if you want to keep separate smaller CSS files but have them combined into one automatically (avoiding multiple slow HTTP calls).
Compass adds to this a big set of pre-defined mixins that are easy for handling cross-browser stuff.
It's written in Ruby but it can easily be used with any system....
Here is the best way:
create a general CSS file with all the shared code
insert all page-specific CSS code into the page itself, in a <style> tag in the head or via the style="" attribute on elements
This way you have only one CSS file with all the shared code, plus the HTML page itself.
By the way (and I know this is not the right topic), you can also encode your images in base64 (and you can do the same with your JS and CSS files). That way you reduce the number of HTTP requests even further, towards 1.
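For example, an image inlined as a data URI in CSS (this particular base64 string is the classic 1x1 transparent GIF):

.spacer {
  background-image: url(data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7);
}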
SASS and LESS really make this all a moot point. The developer can set up effective component files and combine them all at compile time. In SASS you can toggle off compressed mode while in development for easier reading, and toggle it back on for production.
http://sass-lang.com
http://lesscss.org
In the end, a single minified CSS file is what you want regardless of the technique you use: less CSS, fewer HTTP requests, less demand on the server.
The advantage of a single CSS file is transfer efficiency: each HTTP request means an HTTP response with its own headers for each file requested, and that takes bandwidth.
I serve my CSS as a PHP file with the text/css MIME type in the HTTP header. This way I can keep multiple CSS files on the server side and use PHP includes to push them out as a single file when requested by the user. Every modern browser receives the .php file with the CSS code and processes it as a .css file.
You can just use one CSS file for performance and then comment out sections like this:
/******** Header ************/
/* some css here */
/******* End Header *********/
/******** Footer ************/
/* some css here */
/******* End Footer *********/
etc.
I'm using Jammit to deal with my CSS files, and use many different files for readability.
Jammit does all the dirty work of combining and compressing the files before deployment to production.
This way, I've got many files in development but only one file in production.
A bundled stylesheet may save page-load time, but the more styles there are, the slower the browser renders animations on the page you are on. This is caused by the huge number of unused styles that may not apply to the current page but that the browser still has to evaluate.
See: https://benfrain.com/css-performance-revisited-selectors-bloat-expensive-styles/
Bundled stylesheets advantages:
- page load performance
Bundled stylesheets disadvantages:
- slower behaviour, which can cause choppiness during scrolling, interactivity and animation
Conclusion:
To solve both problems, the ideal production setup is to bundle all the CSS into one file to save on HTTP requests, but use JavaScript to extract from that file the CSS for the page you are on and update the head with it.
To know which shared components are needed per page, and to reduce complexity, it would be nice to declare all the components a particular page uses, for example:
<style href="global.css" rel="stylesheet"/>
<body data-shared-css-components="x,y,z">
I've created a systematic approach to CSS development. This way I can utilize a standard that never changes.

First I started with the 960 grid system. Then I created single lines of CSS for basic layouts, margins, padding, fonts and sizes, and I string them together as needed. This allows me to keep a consistent layout across all of my projects and reuse the same CSS files over and over, because they are not project-specific. Here's an example:

<div class="c12 bg0 m10 p5 white fl"></div>

This means the container is 12 columns across, uses bg0, has margins of 10px, padding of 5px, white text, and floats left. I could easily change this by removing or adding one of what I call "light" styles. Instead of creating a single class with all these attributes, I simply combine the single styles as I code the page. This lets me create any combination of styles, doesn't limit my creativity, and doesn't force me to create a massive number of near-identical styles. Your stylesheets become a lot more manageable, stay minimal, and can be reused over and over.

I have found this method fantastic for rapid design. I also no longer design first in PSD but in the browser, which also saves time. In addition, because I have also created a naming system for my backgrounds and page design attributes, I simply swap out an image file when starting a new project (bg0 = body background in my naming system). If I had a white background on one project, changing it to black simply means that on the next project bg0 will be a black background or another image.

I have not found anything wrong with this method yet and it seems to work very well.
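A sketch of what those single-purpose rules might look like (the widths and colours are illustrative, tied to a 960 grid):

.c12   { width: 940px; }  /* spans all 12 columns of a 960 grid */
.m10   { margin: 10px; }
.p5    { padding: 5px; }
.white { color: #fff; }
.fl    { float: left; }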
I've recently begun working on a very large, high-traffic website. We would very much like to reduce the size and number of our stylesheets. Minification is one route we will pursue, but is anyone aware of any tools for checking ID and class use? Literally scanning the website to see what's active and what isn't?
Alternatively, any software for redacting the CSS to reduce repetition and size?
Thanks in advance
Literally scanning the website to see what's active and what isn't?
Dust-Me Selectors is a Firefox plugin that you can use to show which CSS rules aren't being used.
http://www.sitepoint.com/dustmeselectors/
I can certainly recommend Page Speed (http://code.google.com/speed/page-speed/) by Google to check the performance (and possible improvements) of your webpages.
Page Speed also checks CSS and usage of classes on your webpages.
It is used in combination with Firebug.
Gzip compression in the webserver.
Expiry dates that lie far in the future to avoid redownloading the CSS files.
Alternatively, any software for redacting the CSS to reduce repetition and size?
Yet another level of indirection: you (and your team) should write long CSS files with as many comments as needed, and then write a tool that publishes merged files as needed (different templates need different files), with comments stripped and the output minified, as http://www.cleancss.com (CSSTidy) can do. Readability comes first if you want to be able to modify a file in a month's time or keep track of modifications (worse still if somebody else has to do it!).
Another option is to reduce the number of templates used throughout the site. There's no need for two templates with 2px of difference (grid layouts are a good way to stick to this), or for inconsistent ways of displaying error messages. Define a common look and feel for your site and give instructions to the web designers, if that isn't already done.
I'm working on a project which stores single images and text files in one place, like a time capsule. Now, most every project can be saved as one file, like DOC, PPT, and ODF. But complete web pages can't -- they're saved as a separate HTML file and data folder. I want to save a web page in a single archive, and while there are several solutions, there's no "standard". Which is the best format for HTML archives?
Microsoft has MHTML -- basically a file encoded exactly like a MIME HTML email message. It's based on an existing standard, and MHTML on its own was proposed as RFC 2557. It's a great idea and it's been around forever, except it's been a "proposed standard" since 1999. Plus, implementations other than IE's are just cumbersome. IE and Opera support it; Firefox and Safari only via a cumbersome extension.
Mozilla has the Mozilla Archive Format -- basically a ZIP file with the markup and images, with metadata saved as RDF. It's an awesome idea -- Winamp does this for skins, and ODF and OOXML for their embedded images. I love this, except: 1. nobody except Mozilla uses it, and 2. the only extension supporting it hasn't been updated since Firefox 1.5.
Data URIs are becoming more popular. Instead of referencing an external location à la MHTML or MAF, you encode the file straight into the HTML markup as base64. Depending on your view, it's streamlined, since the files are right where the markup is. However, support is still somewhat weak: Firefox, Opera, and Safari support them without gaffes; IE, the market leader, only started supporting them in IE8, and even then with limits.
Then of course there's "Save complete webpage", where the HTML markup is saved as "savedpage.html" and the files in a separate "savedpage_files" folder. AFAIK, everyone does this, and it's well supported. But having to handle two separate elements is not simple and streamlined at all. My project needs to have them in a single archive.
Keeping in mind browser support and ease of editing the page, what do you think's the best way to save web pages in a single archive? What would be best as a "standard"? Or should I just buckle down and deal with the HTML file and separate folder? For the sake of my project, I could support that, but I'd best avoid it.
My favourite is the ZIP format. Because:
It is very well suited for the purpose
It is well documented
There are a lot of implementations available for creating or reading them
A user can easily extract single files, change them and put them back in the archive
Almost every major operating system (Windows, Mac and most Linux distributions) has a ZIP program built in
The alternatives all have some flaw:
With MHTML, you can not easily edit it.
With data URIs, I don't know how difficult the implementation would be. (With ZIP, even I could do it in PHP three years ago... see the sketch after this list.)
The option to store things as separate files just has far too many things that could go wrong and mess up your archive.
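To give an idea of how little code the ZIP route takes, here is a minimal PHP sketch (the file names are placeholders):

<?php
// Bundle a saved page and its assets into a single archive
$zip = new ZipArchive();
$zip->open('page-archive.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);
$zip->addFile('index.html');
$zip->addFile('style.css');
$zip->addFile('images/logo.png');
$zip->close();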
It is not only a question of file format. Another crucial question is what exactly you want to store. Is it:
to store the whole page as it is, with all referenced resources (images, CSS and JavaScript)?
to capture the page as it was rendered at some point in time; a static image of some rendered state of the web page DOM?
Most current "save page as" functionality in browser, be it to MAF or MHTML or file+dir, attempts the first way. This is ultimately flawed approach.
Don't forget that web pages these days are more like local applications than static documents you can easily store. Potential issues:
one page is in fact several pages built dynamically by JS; user interaction is needed to get it to the desired state
AJAX applications can communicate with a remote service, rendering the page unusable for offline viewing
hidden links in JavaScript code: such resources are then not part of the stored page, and even parsing the JS code may not discover them; you need to run the code
even the position of basic HTML elements may be computed dynamically by JS, and it is not always possible/easy to recreate that locally; you would need some sort of JS memory dump, and to load that, to get the page into the state you hoped to store
And many, many more issues...
Check out the Chrome SingleFile extension. It stores a web page in one HTML file, with images inlined using the already-mentioned data URIs. I haven't tested it much, so I cannot say how well it handles "volatile" AJAX pages.
PDFs are supported on nearly all browsers on nearly all platforms and store content and images in a single file. They can be edited with the right tools. This is almost definitely not ideal, but it's an option to consider.
Use a zip file.
You could always make a program/script that extracts the zip file to a temp directory and loads the index.html file in your browser. You could even use an index.ini/txt file to specify the file that should be loaded when extracting.
Basically, you want something like the Mozilla Archive Format, but without the unnecessary RDF crap just to specify which file to load.
MHT files are good, but they usually use base64 to embed files, which makes the file size bigger than it should be (data URIs have the same problem). You can add attachments as binary, but you'd have to do that manually with a hex editor or create a tool for it, and client support might not be as good.
Of course, if you want to use what browsers generate, MHT (Opera and IE at least) might be better.
I see no excuse to use anything other than a zip file.
Well, if browser support and ease of editing are the biggest concerns I think you are stuck with the file+directory approach unless you are willing to provide an editor for the single file format and live with not very good support in browsers.
You can create a single file by compressing the contents. You can also create a parent directory to ease handling.
The problem is that HTML is bottom-up, not top-down. Look at your file name, which saved on my box as "What's the best "file format" for saving complete web pages (images, etc.) in a single archive? - Stack Overflow.html".
Just add a '|' and one has trouble doing copy-and-paste backups to a spare drive. In the end you end up chopping the file name in order to save it. Dozens, perhaps hundreds, of identical index.html or index.php files are cluttering my drives.
The partial solution is to write your own CMS and use scripts to map all relevant files into a flat-file database, then use fileName, size, mtime and md5 to get a unique ID for each file. Create a flat-file index permitting 100k or 1000k records. The goal is to write once and use many times. So you need a real CMS; you need a unique ID based on content (e.g. index8765432.html) that goes in your files_archive, and ditto for the others. Then you can non-destructively symlink from the saved original HTML to the files_archive and just recreate the file using a PHP or alternative script if need be. I don't know if it will work, as I'm at the same point you're at -- maybe in a week I'll know for sure.

The more useful approach is to have a top-down structure based on your business or personal wants and related tasks, so your files might be organized top-down while external ones stay bottom-up to preserve the original content. My interest is in Web 3.0 services, and the closer you get to machine-to-machine interaction, the greater the need to structure the information. Maybe it's time to rethink the idea of bundling everything into a single file: if you have hundreds of copies of main.css, why bundle, when a top-down solution might let you modify one file instead of hundreds?