Multiplayer billiards game physics simulation [closed] - networking

I’m building an online multiplayer billiards game and I’m struggling to think of the best approach to multiplayer physics simulation. I have thought of three possible scenarios, each with its own advantages and disadvantages, and I would like to hear opinions from those who have either implemented something similar already or have experience with multiplayer online games.
1st Scenario: Physics simulation on the clients. The player taking the shot sends the angle and power of the shot to the server, and the server updates all clients with these values so they can simulate the shot independently.
Advantages:
Low server overhead
Disadvantages:
Problems with synchronization. Clients must run exactly the same simulation regardless of their frame rate. (Possible to solve with a clever fixed-timestep algorithm like the one described here.)
Cheating. Players can cheat by tweaking the physics engine. (Possible to detect by comparing ball positions with the other players at the end of the shot. But if only two players are at the table (i.e. no spectators), how do you tell who the cheater is?)
2nd Scenario: Physics simulation on one (i.e. “master”) client (e.g. whoever takes the shot), which then broadcasts each physics step to everyone else.
Advantages:
No problems with synchronization.
Disadvantages:
1. Server overhead. Each time step the “master” client will send the coordinates of all balls to the server, and the server will have to broadcast them to everyone else in the room.
2. Cheating by the “master” player is still possible.
3rd Scenario: The physics will be simulated on the server.
Advantages:
No possibility to cheat, as the simulation runs independently of the clients.
No synchronization issues; one simulation means everyone will see the same result (even if not at exactly the same time, because of network lag).
Disadvantages:
Heavy server load. Not only will the server have to run the physics 30/60 times per second for every table (there might be 100 tables active at the same time), it will also have to broadcast all the coordinates to everyone in the rooms.
EDIT
Some games similar to the one I’m making, in case someone is familiar with how they have overcome these issues:
http://apps.facebook.com/flash-pool/
http://www.thesnookerclub.com/download.php
http://gamezer.com/billiards/

I think that the 3rd one is the best.
But you can make it even better if you compute all the collisions and movements on the server before sending them to the clients (every collision, movement, etc.); the clients then just have to "execute" them.
If you do that you send the information only once per shot, which greatly reduces the network issue.
And as JimR wrote, you should use closed-form velocity/movement equations instead of doing step-by-step incremental simulation (like the Runge-Kutta method).
The information that the server sends to the clients would look like this:
Blackball hit->move from x1,y1 to x2,y2 until time t1
Collision between blackball and ball 6 at time t1
Ball 6 -> move from x3,y3 to x4,y4 until time t3
Blackball -> move from x5,y5 to x6,y6 until time t4
Collision between Ball 6 and Ball 4 at time t3
and so on until nothing moves anymore.
Also, you are likely to need a bunch of classes representing the different physics events and equations, and a way to serialize them to send them to the clients (Java or C# can serialize objects easily).
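A minimal Java sketch of what such serializable event classes might look like (all class and field names here are invented for illustration; they are not from any particular engine):

```java
import java.io.Serializable;
import java.util.List;

// Hypothetical event types the server could precompute for one shot.
// Clients just replay them; names and fields are illustrative only.
abstract class ShotEvent implements Serializable {
    double time;              // simulation time at which the event applies
}

class MoveEvent extends ShotEvent {
    int ballId;
    double fromX, fromY;      // start position
    double toX, toY;          // end position, reached at 'time'
}

class CollisionEvent extends ShotEvent {
    int ballA, ballB;         // a sentinel value could denote a cushion, for example
}

// The server resolves the whole shot, then sends one ordered list per shot.
class ShotResult implements Serializable {
    List<ShotEvent> events;   // sorted by time; clients interpolate between them
}
```

The client walks the list in time order and interpolates each ball between its start and end positions, so no physics needs to run client-side at all.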

Related

What's the best method to implement multiplayer on a Unity Billiard game?

I'm making an online billiard game. I've finished all the mechanics for single player, the online account system, the online inventory system, etc. Everything's fine, but I've gotten to the hardest part now: the multiplayer. I tried syncing the position of each ball every frame, but the movement wasn't smooth at all; the balls would move back and forth and it looked bad in general. Does anyone have a solution for this? How do other billiard games like the one on Miniclip do it? I'm honestly stuck here and frustrated, as it took me a while to learn Photon networking only to find out it's not that good at handling physics synchronization.
Would uNet be a better choice here ?
I appreciate any help you give me. Thank you!
This is done with PUN already: https://www.assetstore.unity3d.com/en/#!/content/15802
You can try to play with the synchronization settings or implement a custom OnPhotonSerializeView (see DemoSynchronization in the PUN package). Make sure that physics simulation is disabled on the synchronized clients. See DemoBoxes for a physics simulation sample.
Or, if balls can only move along straight lines, do not send all positions every frame. Send positions and velocities only when balls collide, and do a simple velocity simulation in between. This can work even with more complex physics, but the general rule is the same: synchronize at key points (see the sketch below for the general idea). Of course this is not as simple as automatic synchronization.
Also note that classic billiards is a turn-based game, so you do not have all the complexity of simultaneous player interaction. In the worst case you can 'record' the simulation on the current player's client and 'play it back' on the others.
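To make the "synchronize at key points" idea concrete, here is an engine-agnostic Java sketch (hypothetical names, not the Photon or uNet API): non-simulating clients extrapolate each ball along its last reported velocity until the next key-point update (collision, cushion hit, ball stopping) arrives.

```java
// Engine-agnostic sketch of dead reckoning between key-point updates.
// All names are illustrative; this is not the Photon or uNet API.
class BallState {
    double x, y;        // position at the last key point
    double vx, vy;      // velocity at the last key point
    double timestamp;   // when that key point was generated (seconds)

    // Called every rendered frame on non-simulating clients:
    // extrapolate along the last known velocity until the next key point arrives.
    void positionAt(double now, double[] out) {
        double dt = now - timestamp;
        out[0] = x + vx * dt;
        out[1] = y + vy * dt;
    }

    // Called whenever a key-point packet (collision, cushion hit, stop) arrives.
    void applyKeyPoint(double x, double y, double vx, double vy, double timestamp) {
        this.x = x; this.y = y;
        this.vx = vx; this.vy = vy;
        this.timestamp = timestamp;
    }
}
```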

Storm real-time processing: What if it goes down? [closed]

Storm is a free and open source distributed realtime computation system. It receives streams of data and processes them. What if Storm goes down and part of the data never goes through it, meaning the calculations end up out of sync?
How can Storm solve this problem? If it can't, how could one solve this problem?
A similar question would be: How can I read old data that existed before Storm was added?
How can I read old data that existed before Storm was added?
The data must be stored somewhere (say, HDFS). You write a Spout which accepts data from some transport (say, JMS). Then you would need to write replay code that reads the appropriate data from HDFS, puts it on a JMS channel, and lets Storm deal with it. The trick is knowing how far back you need to go in the data, which is probably the responsibility of an external system, like the replay code itself. This replay code may consult a database, or the results of Storm's processing, whatever they may be.
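Very roughly, the spout end of that might look like the sketch below. It uses the old backtype.storm spout API (newer Storm releases moved the packages to org.apache.storm), and the static queue standing in for a real JMS consumer, plus the offer() hook the replay code would call, are purely hypothetical:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

// Sketch of a spout fed by an external transport. A separate JMS listener
// (or the replay code reading old data out of HDFS) would push raw events
// onto this queue; the spout just drains it into the topology.
public class ReplayableSpout extends BaseRichSpout {
    private static final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
    private SpoutOutputCollector collector;

    // Hypothetical hook the JMS listener / replay code would call.
    public static void offer(String rawEvent) {
        incoming.offer(rawEvent);
    }

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        String event = incoming.poll();   // non-blocking; Storm calls this in a loop
        if (event != null) {
            collector.emit(new Values(event));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("event"));
    }
}
```

The point is that the spout does not care whether the events are live or replayed; that decision lives entirely in whatever feeds the transport.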
Overall, the 'what if it goes down' question depends on what type of calculations you are doing, and on whether your system deals with back pressure. In short, much of the durability of your streams is dependent on the messaging/transport mechanism that delivers to Storm.
Example: if you simply need to transform (xslt) individual events, then there is no real-time failure and no state issue if Storm goes down. You simply start back up and resume processing.
The system that provides your feed may need to handle the back pressure. Messaging transports like Kafka can handle durable messaging, and allow Storm to resume where it left off.
The specific use case that results in "calculations would not be in sync" would need to be expounded upon to provide a better, more specific answer.

Is it possible to create a war shooter (like Call of Duty, Medal of Honor, etc.) with UDK in one year? [closed]

I don't know UDK, but I'm interested: is it possible to create a war shooter (like Call of Duty, Medal of Honor, etc.) with UDK in one year with two programmers and one 3D designer? I'm not saying as good as Call of Duty or Medal of Honor, but close to them.
As someone working in the games industry, I would say no. There is a good reason for multiple year development cycles with those teams. Call of Duty games take 2 years to develop even with large teams like Treyarch (around 250 employees).
It's not simply creating tons and tons of content, but making it all fit together, and all the dependencies between things going in. Like you can't start scripting up encounters until you have a level blocked out, and you can't start adding effects to weapons if the weapon isn't in game. You can't localize the text or dialog until it's finalized. A lot of that also depends on new tech from the engineering team. Etc etc etc.
That's all independent of what engine you choose to use.
Yes.
It is an extremely ambitious goal but if you are all working full time on this you could get something of middling quality in that time. It may be buggy and/or unoriginal but it could be done.
No chance of it being anywhere near the quality of the mentioned games though.
Of course you can do it; a three-person army building a whole game, well... I will suggest one thing: scrap the cinematics, get rid of nonsense levels and cut to the chase. That's what most hardcore gamers look for: no fancy cinematics, straight to the action.
In my opinion a good multiplayer game mode will do the trick for now; then wait for the feedback. Maybe if your game is good enough you can hire more staff and work on the single-player side. So cut to the chase and go straight to the action; that's my opinion given the number of people involved in your development. Many guys here will tell you, "oh no, you can't get the same level of quality as COD." Of course you can't, that's obvious: three against hundreds, no chance. What matters is the idea: if you have a good idea, put it on paper and then into your game. Don't go crazy and try to create the whole thing; there's no physical time for the three of you, working 24/7, to finish that off in a year. So start small with good visual texturing, then go from there. I hope you guys do well.
Nowadays that might be almost impossible. I work at Treyarch, and while I can't go into much detail, here are the basics:
You need QA - just this, and your quest is over. And this is only to test your game play, logic, whether a level can be completed, etc.
You need even more QA, for things you've never expected - certification (Xbox, PS3, Nintendo), localization (language, censorship, etc)
You need a production team. Very motivated and goal-oriented.
You need programmers, artists, animators, builders, scripters.
Put at least one person behind every discipline and you need at minimum 10 people. That would have been fine 15 years ago. Unfortunately not now.
People can still do it, though. There was a sci-fi robot game developed by a team of nine using UDK (I can't remember the name of the game) - but that means the engine is provided, and the people are very motivated and probably very good themselves.
Of course you can...the art assets won't be as good though
If you were to purchase your assets (3D models, open source engine, pre-made textures) then I suspect this could be all put together if you guys coded in only the fundamental parts of the game that make it yours.
The question is how much you want to have commercially bought versus homegrown and implemented.

Planning and coping with deadlines in Scrum [closed]

From wikipedia:
During each “sprint”, typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product “backlog,” which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.
I was reading this and two questions immediately popped into my head:
1) If a sprint is only a couple of weeks long, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily take double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements. This goes against everything I know about generating reliable estimates, as opposed to roughly estimating and then doubling it!
2) Since the requirements are supposed to be locked and a deliverable product is supposed to be available at the end, what happens when something does take twice as long? What if a feature is only half done at the end of the sprint?
The wiki article goes on to talk about Sprint planning, where things are broken down into much smaller tasks for estimation (<1 day) but this is after the Sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.
BTW:
Although the word is not an acronym, some companies implementing the process have been known to spell it with capital letters as SCRUM. This may be due to one of Ken Schwaber’s early papers, which capitalized SCRUM in the title.
You are supposed to use your velocity to plan the next sprint. Velocity neatly handles the fact that your estimates are wrong, but consistently wrong. Also note that stories are supposed to be short, I'd say 2-3 days maximum. Stories bigger than that should be broken down into smaller stories.
If one story is not completed as planned, then your velocity goes down and you won't be able to take on as much work in the next iteration.
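To make that concrete, here is a toy Java illustration with made-up numbers: because velocity is measured in the same (possibly inflated) story points the team estimates in, consistent over- or under-estimation cancels out.

```java
public class VelocityPlanner {
    public static void main(String[] args) {
        // Toy numbers: story points actually completed in the last three sprints.
        int[] completedPoints = {21, 18, 24};

        int sum = 0;
        for (int p : completedPoints) sum += p;
        int velocity = sum / completedPoints.length;   // average = 21 points per sprint

        // Even if every individual estimate is, say, 2x optimistic, the measured
        // velocity is expressed in those same inflated points, so committing
        // about this many points next sprint is still realistic.
        System.out.println("Commit roughly " + velocity + " points next sprint");
    }
}
```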
The wiki article goes on to talk about Sprint planning, where things are broken down into much smaller tasks for estimation (<1 day) but this is after the Sprint features are already planned and the release agreed, isn't it?
Wrong, they are done in the same meeting. The sprint stories are not agreed upon until everyone leaves the sprint planning meeting. Whatever questions you need to ask the PO to enable your commitment to the stories, you ask before or during the SP meeting.
Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if this feature is only 1/2 done at the end of the sprint
The functional objective of the story is locked, not the implementation details. The details come out in conversation during the sprint. Any details considered too large to be contained in the current sprint's scope are put back on the backlog for later prioritization. Remember, you are building incremental products here. It's like peeling an onion. The story must be satisfied and the code must be working at the end of the sprint. That doesn't mean the whole feature is entirely complete and releasable to a user.
If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily double what seems reasonable. As a developer, I hate being pushed into committing what I can deliver in the next month based on a set of customer requirements, this goes against everything I know about generating reliable estimates rather than having to roughly estimate and then double it!
You are correct here: you can't estimate accurately. Scrum embraces this fact and uses velocity, trending, averaging, and gut feel to get close. If you don't come to grips with forgetting about accurate hour-increment measurements, you won't ever feel comfortable with Scrum.
In answer to #2, if a feature isn't done at the end of the sprint, you don't deliver it. You may be able to deliver part of it, and if you can do so in a useful fashion, do so. But if you can't deliver it this sprint, remove it, and deliver it in the next sprint.
In answer to #1, there are numerous ways to try to improve the accuracy of your estimates. You might use function-point analysis or just a simple exercise where the entire project team takes the list of tasks separately and comes up with their own estimates for each task, then reviews each task and shares their estimates, and discusses the reasons why (for instance) Bob's estimate for this task is 8h and Tina's is 16h. The team figures out who is right (hopefully) or comes to consensus, and uses that as the estimate.
Over time, you'll come to learn which of your estimates tend to be overly optimistic, and which are overly pessimistic, and thereby improve your ability to estimate your own tasks.
The burn-down chart can really help you here. It is an early warning system for the whole project team, to let you all know when one or more people are falling behind. Then the team can reorganize to help make the sprint commitment if necessary, or cancel the sprint due to unforeseen circumstances, and kick off a new sprint with their improved understanding of the problem space.
Finally, you might consider that statistics are on your side. If you overestimate 10% of the tasks, and underestimate 10% of the tasks, you'll probably be OK.
When we did Scrum in an earlier project, we first agreed on a rough sprint plan including high-level features (stories), then refined the plan by breaking each of these down into groups of concrete tasks of preferably one day's length at most, estimating each task. After this we often found out that the original plan was overcommitted to some extent (typically because we didn't take into account that developing a story includes unit testing, code review and documentation too), so we adjusted it accordingly. Btw, we used "estimation poker": each member chose a card with a number on it (work hours/days) and everyone showed his card on the count of 3. If the numbers differed a lot, we briefly discussed why, and then had a new round until we reached near consensus.
Note also that estimation is very domain and technology dependent. In that project, we understood both fairly well, and we were building a new app from scratch, so our estimations were fairly accurate. In my current project we are working with legacy code, in a domain we don't quite understand yet, so our estimates are often wildly out of range.
As the project rolls on, estimates are gradually getting better (related to the fact that more and more risks and tricky issues are being resolved, and the team's domain expertise grows), so the velocity of the team can actually grow over time.
In answer to #1, I'm not sure I agree with some built-in assumptions in the question. In my experience, Agile (including Scrum) is not about time estimates. The whole idea is to move away from that and instead move to a system where you have a known velocity and fixed sprint lengths. For instance, you release every 2 weeks (a good sprint length) with some new code. You see how many story points (not time units, but story points) you get done over a sprint, and then another, and after you've done a few sprints you know your rough velocity (i.e., how many story points you can do on average per sprint).
The idea is that the customer gets continuous updates to the application as each sprint is finished and can see constant progress. They know which items are scheduled to come in future sprints, but they are aware that if something slips (because of an incorrect difficulty rating, i.e. the story point estimation, or an outside problem) it will instead come in the next sprint and everything else will be moved out a little.
So it's not about developing the software based on some seemingly arbitrary estimation. Instead, it's about planning what functionality or features you want, assigning difficulty (story points) to those features (relative to the other features), and working through them to determine a velocity. Only then can rough estimates be obtained. Once the average velocity is known, we can make some rough guesses about time frames. However, even these should be considered rough approximations because, again, it's not about time, it's about constant feature releases. Clearly, this mindset must exist with EVERYONE on the team, not just the engineers.
Here is a link (wikipedia) that goes into it a bit more.
Anyway, I hope this helps you, good luck!
It's actually quite simple:
You prioritize the tasks, and if you see that you don't have enough time, the low-priority tasks simply get dropped or moved into the next sprint.
The Product Owner decides what he wants and sets the priorities, and you develop in that order. You should have a usable product at the end of the sprint, not the fully-featured product.

exploring mathematics of/in computer science [closed]

I have been working for two years in software industry. Some things that have puzzled me are as follows:
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation using stress analysis techniques (read: mathematical equations) to determine exactly what kind and grade of steel should be used, but when a software developer deploys a web server application he just guesses the estimated load on his server and leaves the rest to luck and god; there is nothing he can use to simulate his problem mathematically (my observation).
Great software (wind tunnel simulators, etc.) and computing programs (like MATLAB) exist to simulate real-world problems (because they have their mathematical equations), but we in the software industry are still clueless about how many actual resources in terms of memory, computing power, clock speed, RAM, etc. will be needed when our server-side application is actually deployed. We just keep guessing and solve such problems by more or less 'hit and trial' (my observation).
Programming is done against APIs, whether in C, C#, Java, etc. We are never able to exactly check the complexity, and hence the efficiency, of our code, because somewhere we are using an abstraction written by someone else whose source code we either don't have or didn't have time to check.
e.g. If I write a simple client-server app in C# or Java, I am never able to calculate beforehand what the efficiency and complexity of this code will be, or what the minimum this whole client-server app will require (my observation).
Load balancing and scalability analysis are just too vague and are merely solved by adding more nodes if requests on the server increase (my observation).
Please post answers to any of my above puzzling observations.
Please post relevant references also.
I would be happy if someone proves me wrong and shows the right way.
Thanks in advance
Ashish
I think there are a few reasons for this. One is that in many cases, simply getting the job done is more important than making it perform as well as possible. A lot of software that I write is stuff that will only be run on occasion on small data sets, or stuff where the performance implications are pretty trivial (it's a loop that does a fixed computation on each element, so it's trivially O(n)). For most of this software, it would be silly to spend time analyzing the running time in detail.
Another reason is that software is very easy to change later on. Once you've built a bridge, any fixes can be incredibly expensive, so it's good to be very sure of your design before you do it. In software, unless you've made a horrible architectural choice early on, you can generally find and optimize performance hot spots once you have some more real-world data about how it performs. In order to avoid those horrible architectural choices, you can generally do approximate, back-of-the-envelope calculations (make sure you're not using an O(2^n) algorithm on a large data set, and estimate within a factor of 10 or so how many resources you'll need for the heaviest load you expect). These do require some analysis, but usually it can be pretty quick and off the cuff.
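As an example of the kind of back-of-the-envelope check meant here, the arithmetic is a couple of lines (the numbers below are entirely invented for illustration):

```java
public class BackOfEnvelope {
    public static void main(String[] args) {
        // Hypothetical numbers for a quick sanity check, not a real capacity plan.
        long records = 50_000_000L;          // rows expected at peak
        long bytesPerRecord = 200;           // rough in-memory size per row
        long bytesNeeded = records * bytesPerRecord;

        // ~10 GB: fits on a big box, won't fit in a 2 GB heap. That one line of
        // arithmetic is often all the "analysis" an early design decision needs.
        System.out.printf("Roughly %.1f GB of memory%n", bytesNeeded / 1e9);
    }
}
```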
And then there are cases in which you really, really do need to squeeze the ultimate performance out of a system. In these cases, people frequently do actually sit down, work out the performance characteristics of the systems they are working with, and do very detailed analyses. See, for instance, Ulrich Drepper's very impressive paper What Every Programmer Should Know About Memory (pdf).
Think about the engineering sciences: they all have very well defined laws that are applicable to the design and building of physical items, things like gravity, strength of materials, etc. Whereas in computer science, there are not many well-defined laws to build an application against.
I can think of many different ways to write a simple hello world program that would satisfy the requirement. However, if I have to build an electricity pole, I am severely constrained by the physical world and the requirements of the pole.
Point by point
An electricity pole has to withstand the weather, a load, corrosion, etc., and these can be quantified and modelled. I can't quantify my website launch success, or how my database will grow.
Premature optimisation? Good enough is exactly that: fix it when needed. If you're a vendor, you've no idea what will be running your code in real life or how it's configured. Again, you can't quantify it.
Premature optimisation
See point 1. I can add as needed.
Carrying on... even engineers bollix up. Collapsing bridges, blackouts, car safety recalls, the "wrong kind of snow", etc. Shall we change the question to "why don't engineers use more empirical observations?"
The answer to most of these is that in order to have the meaningful measurements (and accepted equations, limits, tolerances, etc.) that you have in real-world engineering, you first need a way of measuring what it is that you are looking at.
Most of these things simply can't be measured easily. Software complexity is a classic: what is "complex"? How do you look at source code and decide whether it is complex or not? McCabe's cyclomatic complexity is the closest standard we have for this, but it's still basically just counting branch instructions in methods.
There is little math in software programs because the programs themselves are the equation. It is not possible to work out the equation before it is actually run. Engineers use simple (and very complex) programs to simulate what happens in the real world. It is very difficult to simulate a simulator. Additionally, many problems in computer science don't even have a tractable mathematical answer: see the travelling salesman problem.
Much of the mathematics is also built into languages and libraries. If you use a hash table to store data, you know that finding any element can be done in constant time, O(1), no matter how many elements are in the table. If you store it in a binary search tree, lookups take O(log n) when the tree is balanced (and up to O(n) when it is not).
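In Java, for example, those costs are already baked into the standard collections, and the calling code looks identical either way (a trivial sketch):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class LookupCost {
    public static void main(String[] args) {
        Map<Integer, String> hashed = new HashMap<>();   // expected O(1) lookups
        Map<Integer, String> sorted = new TreeMap<>();   // O(log n) lookups, keys kept ordered

        for (int i = 0; i < 1_000_000; i++) {
            hashed.put(i, "value-" + i);
            sorted.put(i, "value-" + i);
        }

        // Both calls look identical at the API level; the asymptotic cost is
        // baked into the data structure you chose, not into your code.
        System.out.println(hashed.get(123_456));
        System.out.println(sorted.get(123_456));
    }
}
```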
The problem is that software talks with other software, written by humans. The engineering examples you describe deal with physical phenomena, which are constant. If I develop an electrical simulator, everyone in the world can use it. If I develop a protocol X simulator for my server, it will help me, but probably won't be worth the work.
No one can design a system from scratch and people that write semi-common libraries generally have plenty of enhancements and extensions to work on rather than writing a simulator for their library.
If you want a network traffic simulator you can find one, but it will tell you little about your server load because the traffic won't be using the protocol your server understands. Every server is going to see completely different sets of traffic.
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation using stress analysis techniques (read: mathematical equations) to determine exactly what kind and grade of steel should be used, but when a software developer deploys a web server application he just guesses the estimated load on his server and leaves the rest to luck and god; there is nothing he can use to simulate his problem mathematically (my observation).
I wouldn't say that luck or god are always the basis for load estimation. Often realistic data can be had.
It's also not true that there are no mathematical techniques to answer the question. Operations research and queuing theory can be applied to good advantage.
The real problem is that mechanical engineering is based on laws of physics and a foundation of thousands of years worth of empirical and scientific investigation. Computer science is only as old as me. Computer science will be much further along by the time your children and grandchildren apply the best practices of their day.
An MIT EE grad would not have this problem ;)
My thoughts:
Some people do actually apply math to estimate server load. The equations are very complex for many applications, and many people resort to rules of thumb, guess-and-adjust, or similar strategies. Some applications (real-time applications with a high penalty for failure: weapons systems, power plant control applications, avionics) carefully compute the required resources and ensure that they will be available at runtime.
Same as 1.
Engineers also use components provided by others, with a published interface. Think of electrical engineering. You don't usually care about the internals of a transistor, just its interface and operating specifications. If you wanted to examine every component you use in all of its complexity, you would be limited to what one single person can accomplish.
I have written fairly complex algorithms that determine what to scale, and when, based on factors such as memory consumption, CPU load, and IO. However, the most efficient solution is sometimes to measure and adjust. This is especially true if the application is complex and evolves over time. The effort invested in modelling the application mathematically (and updating that model over time) may cost more than the efficiency lost to try-and-correct approaches. Eventually, I could envision a better understanding of the correlation between code and the environment it executes in leading to systems that predict resource usage ahead of time. Since we don't have that today, many organizations load test code under a wide range of conditions to empirically gather that information.
Software engineering is very different from the typical fields of engineering. Where "normal" engineering is bound to the context of our physical universe and the laws in it we've identified, there's no such boundary in the software world.
Producing software is usually an attempt to mirror a subset of the real-life world into a virtual reality. Here we define the laws ourselves, picking only the ones we need and making them just as complex as we need. Because of this fundamental difference, you need to look at problem-solving from a different perspective. We make abstractions to make complex parts less complex, just like we teach kids that yellow + blue = green, when it's really the wavelength of the light that bounces off the paper that changes.
Once in a while we are bound by different laws though. Things like Big-O, test coverage, complexity measurements, UI measurements and the like are all models of mathematical laws. If you look into digital signal processing, real-time programming and functional programming, you'll often find that programmers use equations to figure out a way to do what they want - but these techniques aren't really (to some extent) useful for creating a virtual domain that can solve complex logic and branching and interact with a user.
The reason why wind tunnels, simulations, etc. are needed in the engineering world is that it's much cheaper to build a scaled-down prototype than to build the full thing and then test it. Also, a failed test on a full-scale bridge is destructive - you would have to build a new one for each test.
In software, once you have a prototype that passes the requirements, you have the full-blown solution. There is no need to build the full-scale version. You should be running load simulations against your server apps before going live with them, but since loads are variable and often unpredictable, you're better off building the app to scale to any size by adding more hardware than targeting a specific load. Bridge builders have a given target load they need to handle. If they had a predicted usage of 10 cars at any given time, and a year later the bridge's popularity soared to 1,000,000 cars per day, nobody would be surprised if it failed. But with web applications, that's the kind of scaling that has to happen.
1) Most business logic is usually broken down into decision trees. This is the "equation" that should be verified with unit tests. If you put in x then you should get y; I don't see any issue there.
2, 3) Profiling can provide some insight as to where performance issues lie. For the most part you can't say that software will take x cycles, because that will change over time (i.e. the database becomes larger, the OS starts going funky, etc.). Bridges, for instance, require constant maintenance; you can't slap one up and expect it to last 50 years without spending time and money on it. Using libraries is like not trying to figure out pi every time you want to find the circumference of a circle. It has already been proven (and is cost effective), so there is no need to reinvent the wheel.
4) For the most part, web applications scale well horizontally (multiple machines). Vertical scaling (multithreading/multiprocessing) tends to be much more complex. Adding machines is usually relatively easy and cost effective and avoids some bottlenecks that become limiting rather easily (disk I/O). Also, load balancing can eliminate the possibility of one machine being a central point of failure.
It isn't exactly rocket science, as you never know how many consumers will come to the serving line. Generally it is better to have too much capacity than to have errors, pissed-off customers, and someone (generally your boss) chewing your hide out.
