I created a basic program that makes CSS easier to write, easier to debug, and smaller in size.
A basic animation looks like this:
$moveLeftRight|l;r|5px;0|6px;None|
A basic CSS style:
.black-square-with-rounded-corners{
bg:black;
h-w:5vh;
br:2em;
}
It can save a ton of space. When the server gets a request for it, it compiles it to CSS. I am new to back-end development, so I wanted to know which is more expensive, and also whether this is a good idea.
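A minimal sketch of what that expansion step could look like, assuming a simple shorthand-to-property table (the mapping and function names here are illustrative only, not the actual program):

# Expand the shorthand style into plain CSS.
# SHORTHANDS is a made-up mapping for illustration only.
SHORTHANDS = {
    "bg": ["background"],
    "h-w": ["height", "width"],  # one shorthand fans out to two properties
    "br": ["border-radius"],
}

def expand_rule(selector, body):
    """Expand 'bg:black;h-w:5vh;br:2em;' into full CSS declarations."""
    declarations = []
    for part in body.split(";"):
        part = part.strip()
        if not part:
            continue
        key, value = part.split(":", 1)
        for prop in SHORTHANDS.get(key.strip(), [key.strip()]):
            declarations.append("  %s: %s;" % (prop, value.strip()))
    return selector + " {\n" + "\n".join(declarations) + "\n}"

print(expand_rule(".black-square-with-rounded-corners", "bg:black;h-w:5vh;br:2em;"))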
Generally, space is cheaper than time in programming. That said, in networking, it can often be highly advantageous to limit the size of your files through compression of some sort. It sounds to me like this is what you are doing.
If your program essentially compresses CSS, then the metric I’d use isn’t space, but time. Specifically, load time. How fast does your web application load with and without your changes?
Ultimately, "large" network payloads are not actually that big by modern storage standards. 100MB will probably take a while to load over the network, but it won't strain my hard drive.
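If transfer size is the main concern, one quick sanity check is to compare the gzipped sizes of the shorthand source and the compiled CSS, since most servers compress responses anyway. A small sketch (the two strings are the shorthand example from the question and one possible expansion of it):

import gzip

def wire_size(text):
    """Approximate on-the-wire size once the server gzips the response."""
    return len(gzip.compress(text.encode("utf-8")))

shorthand = ".black-square-with-rounded-corners{bg:black;h-w:5vh;br:2em;}"
compiled_css = (".black-square-with-rounded-corners{"
                "background:black;height:5vh;width:5vh;border-radius:2em;}")

print("shorthand, gzipped:   ", wire_size(shorthand), "bytes")
print("compiled CSS, gzipped:", wire_size(compiled_css), "bytes")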
I doubt another DSL on top of CSS would make it easier to write and debug, so you are just trading the space cost for the development cost. Not to mention runtime compilation means you have to implement your own caching logic on the application level.
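For a sense of what that application-level caching might involve, here is a minimal sketch that keys the cache on a hash of the shorthand source, so recompilation only happens when the source changes (compile_to_css is a placeholder for the real compiler):

import hashlib

def compile_to_css(source):
    """Placeholder for the actual shorthand-to-CSS compiler."""
    return source

_cache = {}  # content hash -> compiled CSS

def get_compiled(source):
    key = hashlib.sha256(source.encode("utf-8")).hexdigest()
    if key not in _cache:  # compile once per distinct source
        _cache[key] = compile_to_css(source)
    return _cache[key]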
And it's not much of a gain, since CSS (even with Bootstrap-tier bloat) is a drop in the ocean compared to the megabytes of (potentially uncacheable) JavaScript brought in by advertising modules.
I have a simple, basic question. Assume I have a large website like Facebook, Gmail, and so on. Such a site probably stores hundreds of gigabytes of information every day. My question is: how do these sites store that much information in their database, given database capacity limits? Is there only one database? Is there only one server for the site? If there are multiple servers and databases, how do they communicate with each other?
They are clearly not using one computer...
The systems behind such large sites are very complex, and distributed across datacenters. See http://royal.pingdom.com/2010/06/18/the-software-behind-facebook/
Take a look at this site for info on various architectures employed by those sites (and this site): http://highscalability.com/all-time-favorites/
Most of these sites have gone with a strategy called NoSQL: that is, they don't use traditional relational (RDBMS) databases, but have instead created their own object-relationship frameworks which have the ability to be persisted. This strategy works well at large scale because it drops a number of constraints which would seriously impact the performance of traditional DB methods. However, this generally comes at the cost of reduced reliability, which is generally considered acceptable for those sites' scenarios.
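To make the "many servers, many databases" idea a bit more concrete, one common building block is sharding: each record is routed to one of several databases based on a hash of its key, so no single machine has to hold everything. A minimal sketch (the server names are made up):

import hashlib

SHARDS = ["db-server-01", "db-server-02", "db-server-03", "db-server-04"]

def shard_for(user_id):
    """Pick which database server holds this user's data."""
    h = int(hashlib.md5(user_id.encode("utf-8")).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

print(shard_for("alice"))  # the same user always maps to the same shard
print(shard_for("bob"))

The application (or a routing layer in front of the databases) does this lookup on every read and write, which is one answer to "how do they communicate": the individual database servers don't need to know about each other as long as the routing is consistent.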
P.S. If your question is just out of general interest, then no worries. If you're trying to build a highly scalable application, hold off and consider it for a moment: are you going to be serving a significant percentage of the population of the world, or are you writing a site for maybe a few thousand users? If it's the latter, you don't need Facebook-style scaling; invest your effort and resources elsewhere. If it's the former, start small and then evolve your system, bringing in investment and expertise as your user base grows.
This is more of a general question about which direction would be a better investment for the company.
Our company's core business application is written in Visual FoxPro and is about 9+ years old. The database is huge (15+ gigs), the core logic is complex, and to make matters worse the data model is terrible. The two guys who built it and have maintained it all these years are at least in their 50s, so needless to say retirement or possibly death could come within the next decade or so.
This VFP app drives all our core business functions and requires Terminal Services and Citrix to access it from the outside world. Our web apps have to interface with it via ODBC, and we are always having performance issues with it. The servers that run this system are also very old (Windows 2000 Server) and are falling apart.
Recently we have been having meetings about upgrading the systems that run this core app, as well as other services like email and file storage. The biggest expense, however, is buying new server hardware, OS licensing, Terminal Services licensing, Citrix licensing, etc., to solve some performance and outside-access issues we are currently having, as well as just generally bringing our systems up to date.
The price tag is going to be in the $55K to $65K range. So as a web developer my point of view is that this is a huge waste of money! My solution would be to invest that money in rewriting the core system to run on the web-based .NET platform. This would eliminate the need for Terminal Server and Citrix licensing, along with the pricey hardware and configuration management needed to run them. I don't see the point in investing this kind of money in an antiquated system that should be on its way out anyway.
I am looking to get some convincing arguments as to why this is a waste of money. Hopefully there is someone here that has faced this type of situation before that can give me some points of view. The hardware upgrade seems to be the easiest road to take because they will just have a consultant come in and do it all. A software development project would take longer, require more resources and possibly cost a little more money.
The short-term rewrite vs. re-hardware argument cannot be won. Hardware and licenses are always cheaper than a rewrite. And hardware plus license seems to involve no risk.
You can't win the ROI argument. Unless the system is trivial and you are a genius, it will always cost $100K or more to rewrite an application that actually does something. Think multiple person-years.
You might win the "technical debt" argument. Change is getting more and more complex, risky and expensive. The longer this code is perpetuated, the more risk and cost accumulates.
The real question is "start to fix now?" or "wait until it breaks and suffer later?" And that has no definite $-valued answer.
You can't compete on money, so you have to compete on risk, features, growth, maintainability, adaptability, standards compliance, security, creating unique value for each customer, etc., etc.
"We are now looking at a larger base of customers and more data". That's an argument you might be able to win.
(I'm over 50; I'm not planning on dying any time soon. That argument doesn't win hearts and minds. Unless they're over 80, you can't really use age except as a way to get your argument ignored.)
Focus on the cost (and risk) of making changes.
Prove that you have a web-based solution that makes changes less costly and less risky.
Further, dig into what's there and find parts that can be replaced by a web framework. Code you don't write is cheaper to maintain than code you write.
Every project needs a cost-benefit analysis. If a $60,000 one-time investment will resolve all issues for the next 10 years, then it is (probably) far more economical than hiring a team of developers for even one year to build a newer, better system.
On the other hand, if it's already costing $50,000/year in maintenance and this capital cost is just to keep the system alive, and you'll need to spend another $60k a few years from now, then it warrants serious consideration of a redesign.
Or you could take the middle road and start wrapping it up in something opaque like a web service, then gradually swap out components for better (more efficient, more maintainable, etc.) internal components. Lots of companies go this route because it defers the up-front costs of a rewrite; if necessary you can divert IT resources elsewhere.
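A rough sketch of that wrapping step: put a thin service facade in front of the legacy data, so callers only ever talk to the facade, and each operation can later be rerouted to a new backend without touching the callers. All names here are hypothetical:

# Facade over the legacy system; swap implementations per operation over time.
class OrderService:
    def __init__(self, legacy_conn, new_backend=None):
        self.legacy = legacy_conn   # e.g. an ODBC connection to the VFP data
        self.new = new_backend      # filled in as pieces are rewritten

    def get_order(self, order_id):
        if self.new is not None:    # this operation has been migrated
            return self.new.get_order(order_id)
        cur = self.legacy.cursor()  # still served by the old system
        cur.execute("SELECT * FROM orders WHERE id = ?", (order_id,))
        return cur.fetchone()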
S.Lott is right, though - it's likely that you won't be able to compete on cost alone. You have to try to quantify the risks associated with these ancient systems - for example, how much it will cost the company to find and train qualified FoxPro developers if the original programmers decide to quit (or, to use the parlance of so many managers I've met, "run over by a bus")...
Just to add some further perspective to this: Before .NET (and for a few years after) I conducted most of my projects exclusively in Delphi. At the time, it really was a great choice for enterprise development. I was actually the person who didn't want to "upgrade." After a while, however, it became apparent to both myself and my higher-ups that this scared people outside the company.
Investors, auditors, everyone - they didn't like the idea that our core IT asset was done in some "obscure" language. Of course, Delphi wasn't/isn't really that obscure; there's a "delphi" tag here on SO with a count of 3340. But let's use SO as our example - here are the current counts:
c# - 57293
.net - 30577
asp.net - 26600
java - 31023
vb.net - 5996
delphi - 3340
foxpro - 69
vfp - 27
Let those numbers sink in for a while. Delphi, my tool of choice at the time, now has less than 10% of the representation of C#, and this made non-techies nervous. Foxpro/VFP is not even at 1%. I can't even remember how many times I had to answer questions like:
What happens if the lead developer (me) quits or gets run over by a bus?
How difficult/costly will it be to hire programmers in that field?
What if the vendor stops supporting it? (This almost happened)
What if we want to get outside help? Consultants? Security audits?
How easy will it be to get it to work with outside products?
Blah blah blah, worry worry worry, was how I felt at the time, and this was a product that wasn't really that obscure. In your case, we're talking about FoxPro here. FoxPro has gotten to be almost like COBOL; sure, it's still around, there are people out there who know it, but who starts a new project in FoxPro today? It's boring, it's downright ghetto. VB6 is starting to become ghetto, and VB/Access effectively replaced FoxPro so many years ago.
I'm obviously being slightly melodramatic here, but if I were you, this is the angle I would be taking. Forget about the short-term economics, forget about the age, and focus on the obscurity of the product. How many genuine, qualified responses do they think they'll get if they put a want-ad out for a FoxPro developer? What kind of pay would they have to offer for a position like that? What would the turnover be like? This may all seem remote if these two developers have been there for 20-odd years, but when you're running a multimillion-dollar business, you ought to know that it's never a good idea to stake your very survival on one or two employees - not if you can help it.
In general, supplementing a poor system with tons of hardware is a bad plan. I would probably say that it's better to rewrite, but it's hard to say without knowing the details.
Bear in mind that a decent rewrite should improve performance, reliability and maintainability, so the potential savings are large and will only increase year on year, even if the initial investment is a little more.
In order to figure out if it is worthwhile, you have to account for the following, in addition to the costs of a rewrite (a rough back-of-the-envelope comparison follows the list):
Documenting everything the system currently does, and reverse-engineering the requirements.
Writing unit and integration tests for everything that currently exists. These probably don't exist already, but should.
Cost of maintaining the new system. The new system isn't going to eliminate maintenance costs, merely reduce them. How much will you save?
Cost of hardware for the new system. The new system is going to have to run on something.
Licensing costs for any software/etc. that are needed for the new system. Is everything going to be open source? Or are you going to need several Visual Studio Test Editions for your developers and testers?
Cost of hiring new personnel to do the development. In addition to the straight salary costs, there are office costs. The total might be $300,000, for say 3 developers, counting salary, office space, equipment, licenses, health care benefits.
Time horizon for the savings. The savings aren't going to occur immediately; they will occur in the future. In the meantime, they still have to pay for the licensing of the current system, because something has to do the job until the new system is put in place.
Cash flow issues. Because of the above, in the short term they are going to need more money to fund the development. The actual costs are higher, because they essentially have to get a loan, raise equity, or bear an opportunity cost (they are going to have to forgo some other investment opportunity to pursue the rewrite).
Business risk. There is a danger that the rewrite might cost more, or work worse, than expected.
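To pull those factors into a single back-of-the-envelope comparison, here is a rough sketch; the figures are placeholders drawn from numbers mentioned elsewhere in this thread, not estimates for this particular system:

# Crude 5-year comparison of "upgrade and keep the legacy system" vs "rewrite".
# Every number is a placeholder to be replaced with real estimates.
years = 5

upgrade_now        = 60_000          # hardware, OS, Terminal Services, Citrix
legacy_maintenance = 50_000 * years  # keeping the old system alive

rewrite_cost       = 300_000         # ~3 developers for a year, fully loaded
new_hardware       = 20_000
new_maintenance    = 20_000 * years  # assumed lower, but not zero
bridge_costs       = 50_000          # running the old system while the new one is built

keep_legacy = upgrade_now + legacy_maintenance
do_rewrite  = rewrite_cost + new_hardware + new_maintenance + bridge_costs

print(f"Keep legacy for {years} years: ${keep_legacy:,}")
print(f"Rewrite:                      ${do_rewrite:,}")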
Two important numbers:
Number of "FoxPro" jobs listed in San Francisco's craigslist right now: 2.
Number of ".NET" jobs listed in San Francisco's craigslist right now: 252.
A lot of other points that have been mentioned are valid. However, you can spend as much as you want on hardware, but the fact is that if something breaks and you need help, you are going to have a heck of a time finding more people to help.
Sounds like a good time to start talking about a migration[1] to newer, better-supported technologies. (And in 10 years when .NET is old hat, you can do it all over again :)
[1] And evolve the system, don't rewrite it. I would guess your current system grew very organically based on needs at the time. There's no way that you'll be able to completely replace all of that (at least, not without a couple of years and a few million bucks).
As a long-time VFP developer (over 20 years with FoxPro/VFP, and I STILL have people asking me to write or update their systems in VFP, for a variety of reasons), I can say it's still very powerful. However, while taking much of my OOP and development experience over to .NET, I do find some things in .NET much easier, especially the strong typing. On the other hand, doing a basic report REQUIRES strong type-casting of all the database tables / structures / objects, and in many cases thus far that has been a PITA to do.
The price tag for a rewrite is always a significant consideration, but so too is the collapse of ANY system, regardless of whether it's VFP, VB, Access, or something else. I would strongly suggest getting a consulting company in to help with the re-modeling of your system and perhaps act as a project manager / mentor to your in-house programmers, who may be able to offer their talents even though it may require some training in the new development environment. This way, you get a good base of strong talent in the language, yet keep some costs down by using your own programming staff, though you may still need to hire supplemental programmers. The learning curve from VFP to .NET is real, and can still be a head-scratcher.
There are a variety of companies out there who were VFP specialists and have since migrated their services to the .NET world; they may offer a perfect match for your organization, having historical knowledge and professional experience of BOTH worlds. I know they can act as mentors for the development of such work too.
You can only say it is a waste of money after you have analyzed the ROI; it will depend heavily on how much it costs to rewrite the system.
Classic mistake on JOS - "system is a mess, let's rewrite it".
It will be like looking at an old building, seeing a toothpick, and wondering why it is there. You figure it isn't needed, and pull it out.
Suddenly the building collapses around your head :)
It might be a better idea to:
Rewrite parts of the system for better maintainability.
Optimize the system for better performance.
Abstract the FoxPro-specific parts, so they can be more easily converted to some other technology (see the sketch below).
This incremental approach would reduce risk, and provide some short-term improvements.
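A sketch of what abstracting the FoxPro-specific parts could look like: define a small data-access interface that the rest of the code depends on, with the FoxPro implementation as just one plug-in behind it. The names and query here are illustrative only:

from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """The only thing the rest of the application is allowed to depend on."""
    @abstractmethod
    def find_customer(self, customer_id):
        ...

class FoxProCustomerStore(CustomerStore):
    """Current implementation, talking to the VFP tables over ODBC."""
    def __init__(self, odbc_conn):
        self.conn = odbc_conn

    def find_customer(self, customer_id):
        cur = self.conn.cursor()
        cur.execute("SELECT * FROM customers WHERE id = ?", (customer_id,))
        row = cur.fetchone()
        return dict(zip([col[0] for col in cur.description], row)) if row else None

# Later, a SqlServerCustomerStore (or any other backend) can replace this class
# without changing any code that only knows about CustomerStore.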
There is no magic bullet here for the company. The only way to be sure is to take the hit on a new server to get the stability and speed benefits that brings to the existing business-critical software. Then, once that is parked for a few years, start re-engineering the thing on a different platform like .NET, if that's what you want to do. Bear in mind that you will have to migrate the VFP data into the new database structure at some point.
The studio I work at is currently developing the Tony Hawk XI website and I am responsible for the flash/AS3 development. As part of the pitch, I entered an augmented reality skateboard example to be shown which impressed the client very much.
After a few weeks of getting stronger with Papervision3D, and getting to know the Flar Toolkit, I have successfully imported md2 and dae files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3DSMAX. I want to know what the limitations are on things like poly-count, character rigging and animation, texturing, tricks for exporting and creating the proper format file and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake2 MD2 model, Ernie, pulled inside of a FlarToolkit demo here.
This is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
Brian Hodge (blog.hodgedev.com)
I've heard that 2000 polys is about the threshold for good performance. In practice though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated MovieClip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases, it will increase performance by a good deal, and in others (seemingly when there are a lot of polys on the edge of the viewport) it'll drop the framerate by a good 10-15 fps. So I'd say the view you set up is something to think about as well.
For example, we have a model of an interior of a store with some shelves and products and customers walking around. In total we have just under 600 triangles (according to the StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, which is a new computer with a quad core it runs at a steady 30fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30fps depending on what objects are being clipped, etc.
We try to reduce the poly count and texture creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping out less detailed models for higher detailed ones when zoomed in. If you aren't zooming at all, then this probably won't help.
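The swap itself can be as simple as choosing which mesh to show based on camera distance. A language-agnostic sketch of the idea (written in Python rather than AS3, with made-up thresholds and file names):

# Level-of-detail (LOD) selection: show the high-poly mesh only when the camera
# is close enough for the extra detail to actually be visible.
LOD_LEVELS = [
    (200.0, "ernie_high.dae"),       # closer than 200 units -> detailed model
    (600.0, "ernie_med.dae"),
    (float("inf"), "ernie_low.md2"),
]

def pick_mesh(camera_distance):
    for max_distance, mesh in LOD_LEVELS:
        if camera_distance <= max_distance:
            return mesh

print(pick_mesh(150.0))   # ernie_high.dae
print(pick_mesh(1000.0))  # ernie_low.md2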
Hope that helps a bit.
I was trying to find some exercises online to practice scaling techniques (memcached, SQL optimization, sharding DBs), but I could only find descriptions of these techniques, not any project on which to try them.
This link with slides on scaling techniques is an interesting one, as it sums up some tools to achieve scalability quite well.
Is there a Project Euler kind of site for these kinds of activities? Or at least some exercises (such as a downloadable ASP.NET/PHP site with obvious slowdowns, concurrency issues, and subtle bugs) for people to try and learn how to fight these issues?
I find that the site High Scalability has some nice insights.
It might be interesting to hack at Wordpress. Their caching plugins take care of a lot of scaling issues but it would be cool to write your own plugin or hack at the source to cut down on SQL queries or to cache static pages. If you come up with something, make sure to let the rest of the community know!
George's slides are definitely a good basis to work from. Note that he is not talking about a specific technique or technology; rather he's discussing more general architectural and design decisions that will help your application scale as a whole.
I personally think this sort of high-level thinking would be much more valuable than individual optimisation techniques. Perhaps you could take a well known web application and hack it until it scales well across multiple machines? A cluster of lots of cheap, low-power EC2 machines could be really useful here. Getting an existing or new application to run properly across a number of machines would be a fantastic exercise.
Counter-intuitively, rather than getting as much as possible to run on a single machine, I'd say it would be much more educational to get the same application running on several machines.
Once you have that, it makes sense to move onto more specific improvements like a separate static content tier, memcached, DB sharding, batch operations and so on.
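If you want a concrete starting point for the memcached step, the usual pattern is cache-aside: check the cache first, fall back to the database, then populate the cache. A minimal sketch, with a plain dict standing in for memcached and a sleep standing in for a slow query:

import time

cache = {}   # key -> (timestamp, value); a real setup would use memcached here
TTL = 60     # seconds

def slow_db_query(user_id):
    """Stand-in for an expensive SQL query."""
    time.sleep(0.5)
    return {"id": user_id, "name": "user-%s" % user_id}

def get_user(user_id):
    key = "user:%s" % user_id
    hit = cache.get(key)
    if hit and time.time() - hit[0] < TTL:
        return hit[1]                      # cache hit
    value = slow_db_query(user_id)         # cache miss: go to the database
    cache[key] = (time.time(), value)
    return value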
In terms of specific projects to work on, how about cloning Twitter, Flickr, or The Pirate Bay? They've all had performance and scaling challenges in the past.