When I started working on my project I didn't care much about organizing my CSS file; I just kept adding ad-hoc selectors. Now I have 3048 lines of CSS. I know I need to spend some time organizing this stuff, and I know it will be much shorter once I do, since there's really a lot of redundancy.
But currently it seems to be working fine, and I just want to get the product out the door as soon as possible. So I was wondering: is it OK to have a CSS file this long, and what are the critical drawbacks, if any?
3048 lines isn't that bad, but it could be reduced; I'll give you the benefit of the doubt and assume that you have a fairly complex site.
Super long CSS files start hitting problems in IE, which can only handle a certain number of rules per stylesheet (4095 selectors in IE 9 and earlier) before it silently ignores the rest.
Also, the longer the file, the more time it takes to parse, and thus the longer your page takes to load.
Of course, with more rules comes more memory usage too.
That's about it, really.
It's okay if you don't mind gaining a reputation for being sloppy; that alone ought to be considered a "critical drawback". It takes no time at all to run it through a minifier if you're sure that what you currently have works.
As above: anybody looking at your website while thinking about offering you design work might decide that you're sloppy and not make the offer.
It might take a few milliseconds longer for the client to parse.
It'll take half a second longer to load for people stuck on 768k DSL lines (there are still lots of those around).
Readability, and the negatives that follow from losing it.
Size. Since the file is sent to the user, the longer it is, the longer it takes to transfer (of course, with modern connections this is not a huge issue).
Consider CSS language supersets like LessCss; they dramatically decrease complexity and increase productivity (see the sketch after this list).
Minifiers will also help decrease the size, with a trade-off in readability, so minify only what is shipped, not the code you are working on.
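To give a feel for what LessCss buys you, here's a small hypothetical sketch (all names invented); variables and nesting collapse exactly the kind of repetition that inflates an ad-hoc stylesheet:

    // one variable and some nesting replace several repeated ad-hoc rules
    @brand: #369;

    .sidebar {
      color: @brand;
      a { color: @brand; text-decoration: none; }
      a:hover { text-decoration: underline; }
    }

This compiles down to plain CSS, so nothing changes for the browser; only your source gets shorter.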
Related
First things first: yes, I know they render the same color. My question is simply about speed.
This is just a topic of interest regarding optimising page load speeds, but which of the options in the title will render faster (even if the difference is tiny)?
My thought process is that with the shorthand version (#fff) the browser is tasked with expanding it to the full six digits (#ffffff). On the other hand, the longhand version might take more or less time because of the extra explicit characters.
I figured someone might be able to shed some light on the topic.
#fff is fewer characters, which is generally better: network latency and bandwidth matter more than parsing time in most cases. For this reason, most CSS compressors will intelligently optimize to the #fff form.
If you're concerned about parsing time, I doubt the difference between the two declarations accounts for even 0.005% of total parsing time; much bigger bottlenecks dwarf anything you'd save on color declarations.
It depends on the implementation. One browser could take 100 times longer on the long version, and another could be the other way around.
Write your code so that it's readable, meaning easy to change going forward. If you want fast CSS, consider running it through yui-compressor.
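As a rough illustration of what that gives you (exact output varies by tool, so treat this as a sketch), you keep a readable source and ship the compressed form:

    /* the source you edit */
    body {
        background-color: #FFFFFF;
    }

    /* what a typical compressor ships */
    body{background-color:#fff}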
Q:
Lately I was asked to test some code, to detect bugs and fix the problems. I found many problems, but the main problem here is the code itself: spaghetti code, many, many lines, and tracing through it to fix problems is very difficult. On top of that, some code was copied and pasted from the internet as-is, without any modification; no documentation of this code is possible; and performance is very bad due to heavy use of ViewState in everything. It takes me a lot of time to fix the problems, and I'm afraid that after all this time other bugs may still appear in the future. How do I handle this case? Any advice concerning this issue would be a great help.
Start refactoring. This will not only make the code more readable, it will also give you a better understanding of how it works and what it does.
That's a very common situation: software was developed over some years by many different developers, and now you have to support it. You should fight the urge to throw it away and rewrite the whole application; it is a big effort, and chances are high that you will repeat many mistakes that are already fixed in the bad code. See Joel Spolsky's article on rewrites for more explanation.
In my experience the best way is to refactor the code. However, it involves writing a lot of tests (unit, automated acceptance), and it will take almost as long as rewriting, but it will be less painful.
I think the mentality for working with spaghetti code is summed up quite well in Refactoring by Martin Fowler.
The picture that comes to my mind is surgery: The entire patient except the part to be operated on is draped. The draping leaves the surgeon with only a fixed set of variables. Now, we could have long arguments over whether this abstraction of a person to a lower left quadrant abdomen leads to good health care, but at the moment of surgery, I'm kind of glad the surgeon can focus.
Basically what that means is start small, and update a single part of the code at a time. Little changes will add up to good, structured code in the future.
Of course, if there are major problems with the entire architecture, you may have to heed Cybernate's advice and start from scratch (unpleasant as it may be).
I would also write unit tests while refactoring, as codymanix said. This does a couple of things:
As you refactor, you will have some check that your refactoring won't break things.
Writing the tests is a good way of learning the domain knowledge embedded in the code.
Of course, there is an inherent catch-22 in adding unit tests to spaghetti code: it is usually hard to unit test, and you need to refactor it to make it testable. But if you go bit by bit and start with the low-hanging fruit, you can usually get it into a testable state.
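A hedged sketch of that bit-by-bit move (TypeScript for brevity; every name here is invented): extract one calculation into a pure function, then pin its current behaviour with a test before refactoring further.

    import { strict as assert } from "node:assert";

    // Hypothetical: a calculation that was buried in a page handler,
    // extracted into a pure function so it can be tested in isolation.
    function applyDiscount(total: number, customerYears: number): number {
      // long-standing customers get 5% off per year, capped at 25%
      const rate = Math.min(customerYears * 0.05, 0.25);
      return total * (1 - rate);
    }

    // characterization tests: pin down what the code does today
    assert.equal(applyDiscount(100, 2), 90);
    assert.equal(applyDiscount(100, 10), 75);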
Scrap the existing code and start afresh.
That involves a lot less effort, comparatively.
If that's not a possible option, start refactoring with a tool like ReSharper, which will give you a good start.
I'm considering micro-optimization of a huge CSS stylesheet and have a couple of questions related to that:
Is lowercase better than uppercase for reducing file size?
Is background-position:right; (5 chars) smaller than background-position:0 100%; (6 chars including whitespace)?
Is there anything else that might help reduce file size? (Besides merging CSS selectors, properties, etc., of course; that I'll have to do manually.)
Thanks
You'd be much better off serving the CSS gzipped (e.g., via Apache's mod_deflate or nginx's gzip module) rather than worrying about things like that.
The character case doesn't matter, there's no difference in the number of bytes.
It depends on the browser:
The first statement is one byte shorter, but it also has a different meaning.
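To spell out the "different meaning" part (this follows from how CSS fills in a missing background-position component):

    div { background-position: right; }   /* one keyword: the other axis
                                             defaults to center, i.e. 100% 50% */
    div { background-position: 0 100%; }  /* left edge, bottom: 0% 100% */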
In general file size is not the only factor in the speed calculation. It also depends on how difficult it is for the browser to interpret it. So any overly clever CSS construct might squeeze some bytes out of the total size, but it's possible that the parsing process itself takes longer.
Back to your example: it is possible that the second statement is a little slower, not just because of the extra space, but also because the value consists of two tokens, and depending on the internal representation of the background the browser might have to do some unit conversions. On the other hand, the keyword lookup can take a little more time too, so it is really specific to a particular browser implementation. Most likely any gain would be in the range of nanoseconds, and you shouldn't bother with this kind of optimization, as it will most likely not pay off. But if you really want to do it, you have to profile, i.e. measure the loading time.
In general it's enough to remove all comments and all unnecessary whitespace, but never do any development work on that "minified" source. Keep the original and recreate the compressed version when needed.
More information about this topic: www.minifycss.com.
This sounds like a lot of trouble; your time might be better spent elsewhere if you're trying to get better performance. Are you aware of Steve Souders' work on High Performance Websites? http://stevesouders.com/hpws/
I have recently been put in charge of debugging two different programs which will eventually need to share an XML parsing script, at minimum. One was written with PureMVC, and the other was built from scratch. While it originally made sense to write the one from scratch (it saved a good deal of memory), the memory problems have since been resolved.
Porting the non-PureMVC application will take a good deal of time and effort that would not otherwise be needed, but it will make documentation and code-sharing easier. It will also lower the overall learning curve. With that in mind:
1. What should be taken into account when considering whether it is best to move things to one standard?
(On a related note)
Some of the code is a little odd. Because the interpreting app had to convert commands from one syntax to another, it made sense to have an interpreter object. Because there needed to be communication with the external environment, it made more sense to have one object interact with the environment, and for that object to deal with the interpreter exclusively.
Effectively, an anti-Singleton was created. The object would only interface with the interpreter, and that's it. If a member of another class were to try to call one of its public methods, the object would raise an Exception.
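Roughly like this, to make it concrete (sketched in TypeScript for brevity; the real code isn't, and every name below is invented):

    // the "anti-Singleton": an object only the interpreter may call
    class EnvironmentBridge {
      constructor(private readonly owner: Interpreter) {}

      send(command: string, caller: object): void {
        // refuse any caller that is not the designated interpreter
        if (caller !== this.owner) {
          throw new Error("EnvironmentBridge may only be called by its Interpreter");
        }
        console.log(`forwarding to environment: ${command}`);
      }
    }

    class Interpreter {
      private bridge = new EnvironmentBridge(this);

      run(raw: string): void {
        // convert one command syntax to the other, then hand off
        this.bridge.send(raw.toUpperCase(), this);
      }
    }

    new Interpreter().run("ping"); // fine; any other caller would throw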
There are better ways to accomplish this, but it is definitely a bit odd. There are more standard means of accomplishing the same thing, though they often involve creating classes or class files which are extraordinarily large. The only standards-compliant solution I could find would involve as much commenting and explanation as is currently required, if not more. Considering this:
2. If some code is quirky but effective, is it better to change it to make it less quirky, even if that makes it more unwieldy?
In my opinion this type of refactoring is often not considered in schedules and can only be done when there is extra time.
More often than not, the criterion for shipping code is if it works, not necessarily if it's the best possible code solution.
So in answer to your question, I try to refactor when I have time to do so. Priority one still remains producing a functional piece of code.
Things to take into account:
Does it work as-is?
As Galwegian notes, this is the only criterion in many shops. However, IMO just as important is:
How skilled are the programmers who are going to maintain it? Have they ever encountered nonstandard code? Compare the cost of their time to learn it (including the cost of delayed dot releases) to the cost of your time to refactor it.
If you're maintaining it, then instead consider:
How much time will dealing with the nonstandard code cost you over the intended lifecycle of the codebase (e.g., the time between now and when the whole thing is rewritten)?
That's hard to guess, but consider that many codebases FAR outlive the usefulness envisioned by their original authors. (Y2K anyone?) I've gradually developed a sense of when a refactoring is worthwhile and when it's not, mostly by erring on the side of "not" too often and regretting it later.
Only change it if you need to be making changes anyway. But less quirky is always a good goal. Most of the time spent on a particular piece of software is in maintenance, so if you can do something to make that easier, you'll be reducing the overall time spent on that piece of code. Nonetheless, don't change something if it's working and doesn't need any modifications.
If you have time, now. If you don't have time and it can be avoided, later.
Say you've got an old codebase that you need to maintain and it's clearly not compliant with current standards. How would you distribute your efforts between keeping the codebase backwards compatible and gaining standards compliance? What is important to you?
At my workplace we don't get any time to refactor things just because it makes the code better. There has to be a bug or a feature request from the customer.
Given that rule I do the following:
When I refactor:
If I have to make changes to an old project, I clean up, refactor or rewrite the part that I'm changing.
Because I always have to understand the code to make the change in the first place, that's the best time to make other changes.
Also this is a good time to add missing unit tests, for your change and the existing code.
I always do the refactoring first and then make my changes, so that I'm much more confident my changes didn't break anything.
Priorities:
The parts of the code that most need refactoring often contain the most bugs. And we don't get any bug reports or feature requests for parts which are of no interest to the customer. So with this method, prioritization happens automatically.
Circumventing bad designs:
I guess there are tons of books about this, but here's what helped me the most:
I build facades:
Facades with a new, good design, which "quarantine" the existing bad code. I use these when I have to write new code that must reuse badly structured existing code which I can't change due to time constraints.
Facades with the original bad design, which hide the new good code. I use this if I have to write new code which is used by the existing code.
The main purpose of those facades is dividing good from bad code and thus limiting dependencies and cascading effects.
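A minimal sketch of the first kind of facade (TypeScript for brevity; all names invented): the clean interface quarantines the legacy quirks so new code never touches them directly.

    // hypothetical legacy code: badly structured, off-limits for changes
    const legacy = {
      doStuff(mode: number, flag: boolean): string {
        return mode === 2 && flag ? "OK:42" : "ERR";
      },
    };

    // the design we actually want new code to see
    interface OrderLookup {
      findOrderId(customer: string): number | null;
    }

    // facade: all knowledge of the legacy quirks (magic mode 2, "OK:" prefix)
    // is confined to this one class
    class LegacyOrderFacade implements OrderLookup {
      findOrderId(customer: string): number | null {
        const raw = legacy.doStuff(2, customer.length > 0);
        return raw.startsWith("OK:") ? Number(raw.slice(3)) : null;
      }
    }

    // new code depends only on the clean interface
    const lookup: OrderLookup = new LegacyOrderFacade();
    console.log(lookup.findOrderId("alice")); // 42

The second kind is the mirror image: the facade keeps the old, bad interface on the outside and delegates to well-structured new code on the inside.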
I'd factor in time to fix up modules as I needed to work with them. An up-front rewrite/refactor isn't going to fly in any reasonable company, but it'll almost definitely be a nightmare to maintain a bodgy codebase without cleaning it up somewhat.