How much time should a SOA composite take to deploy?
Scenario:
A composite containing 5-6 BPEL processes.
It is deployed remotely from JDeveloper.
Network connectivity is good.
For me it's taking 15-20 minutes...
It depends on a lot of things, but depending on the size of the processes, 15-20 minutes is probably too long.
Are you deploying to the same machine? (I guess not, given you mention network connectivity.) Slowness like this may indicate that your server does not have sufficient RAM available and is doing a lot of disk swapping. If possible, increase the RAM available to the machine, or stop other processes such as OSB or BAM if you have them running on the same server.
You need to allow about 2 GB per server process (so AdminServer and SOA together take at least 4 GB to operate effectively).
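To confirm what a server JVM actually has available, a quick sketch (plain Java; the class name is mine, and you would run it under the managed server's own JVM settings):

```java
// Minimal heap check: prints the JVM's configured and current memory.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("Max heap (-Xmx)   : " + rt.maxMemory() / mb + " MB");
        System.out.println("Committed heap    : " + rt.totalMemory() / mb + " MB");
        System.out.println("Free in committed : " + rt.freeMemory() / mb + " MB");
    }
}
```

If the max heap is well below 2 GB, heavy garbage collection during deployment is a likely culprit.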
Related
My OpsWorks commands are taking anywhere between 8 and 15 minutes for a single instance. This is extremely painful for deployments that should really only take 2-3 minutes.
Are these timings usual for a PHP application with no extra deployment recipes?
Check that you are not using a t1.* or t2.* instance. Those instances can get really slow once you have depleted your CPU credits (their CPU capacity is throttled).
The setup tasks OpsWorks performs can burn through the CPU credits on those instances before they are even ready for service.
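If you want to check the credit balance programmatically rather than in the console, a hedged sketch using the AWS SDK for Java v1 (the instance ID is a placeholder, credentials come from the default provider chain, and CPUCreditBalance is only reported for burstable instance types):

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Datapoint;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import com.amazonaws.services.cloudwatch.model.Statistic;
import java.util.Date;

public class CreditCheck {
    public static void main(String[] args) {
        AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();
        GetMetricStatisticsRequest req = new GetMetricStatisticsRequest()
                .withNamespace("AWS/EC2")
                .withMetricName("CPUCreditBalance") // reported for burstable (e.g. t2) types
                .withDimensions(new Dimension()
                        .withName("InstanceId")
                        .withValue("i-0123456789abcdef0")) // placeholder instance ID
                .withStartTime(new Date(System.currentTimeMillis() - 3_600_000L)) // last hour
                .withEndTime(new Date())
                .withPeriod(300) // 5-minute buckets
                .withStatistics(Statistic.Average);
        for (Datapoint dp : cw.getMetricStatistics(req).getDatapoints()) {
            System.out.println(dp.getTimestamp() + "  credit balance: " + dp.getAverage());
        }
    }
}
```

A balance near zero during your 8-15 minute deployments would confirm throttling.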
I have IIS 6 running my website on Windows Server 2003 with 4 GB of RAM. I run SQL-intensive code after the user submits a form (math/statistics work). This process is not threaded (should it be, especially if two or more users run the same thing?). But my process seems to consume only a couple of GB of memory, and the server crawls. How do I get my IIS process to use nearly all of the memory?
I see on other sites that 2 GB or 3 GB can be allocated using boot.ini. But is there another way for the process to use more memory? If I make it multithreaded, will there be a process for each thread?
If there is still memory free for IIS, it simply does not need more; giving it more memory will not make it perform better. It is actually good to see some memory unused, available for other processes besides IIS. As for making it multithreaded: whether more memory gets used, and whether you gain any performance, depends on what you can actually run in parallel.
The basic approach here is to start from your requirements and work out what peak usage you can expect. Then run a performance test to see whether your machine can handle that load. To be sure you have some headroom, run another test to find the peak load your machine can handle. Then you will know whether you have to invest any more time.
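A first-impression load test doesn't need heavy tooling; a minimal sketch (in Java for illustration; the URL, thread count, and request count are placeholders to adjust toward your own peak estimate):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        final String target = "http://your-server/form-endpoint"; // placeholder URL
        final int threads = 20, requestsPerThread = 50;           // placeholder load shape
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong totalMs = new AtomicLong(), failures = new AtomicLong();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    long start = System.nanoTime();
                    try {
                        HttpURLConnection c = (HttpURLConnection) new URL(target).openConnection();
                        c.getResponseCode();          // send the request, wait for the response
                        c.getInputStream().close();   // drain so the connection can be reused
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                    totalMs.addAndGet((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        long n = (long) threads * requestsPerThread;
        System.out.println("avg latency: " + totalMs.get() / n + " ms, failures: " + failures.get());
    }
}
```

Watch the average latency as you raise the thread count; the point where it starts climbing steeply is your practical peak.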
Check your database server to make sure the bottleneck is not on that machine; most developers forget to optimize and maintain their databases.
We are using ASP.NET, but sometimes our applications use a lot of CPU or RAM. I want to restrict resources for every virtual directory/web site and get an alert when they reach a threshold.
I don't know whether I can do this for IIS, but I wonder: is this possible for other web servers, like Apache?
In Java, you cannot have the Java Virtual Machine enforce CPU time limits for certain threads: the best you can do is set the thread priority for any threads that you create. If there is no other work to be done, a thread will burn as many CPU cycles as it can get unless it is blocking on something (I/O, a synchronization monitor, etc.).
Using JMX, you can get some information about threads to possibly detect some runaway-thread situation, but you can't directly control the amount of CPU Time allowed for the thread.
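For example, a sketch of runaway-thread detection with ThreadMXBean (the 10-second sampling window and the 90% threshold are arbitrary choices of mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class RunawayThreadDetector {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        final long windowNanos = 10_000_000_000L; // 10 s sampling window (arbitrary)
        long[] ids = tmx.getAllThreadIds();
        long[] before = new long[ids.length];
        for (int i = 0; i < ids.length; i++) {
            before[i] = tmx.getThreadCpuTime(ids[i]); // -1 if unsupported or thread gone
        }
        Thread.sleep(windowNanos / 1_000_000);
        for (int i = 0; i < ids.length; i++) {
            long after = tmx.getThreadCpuTime(ids[i]);
            if (before[i] < 0 || after < 0) continue;
            double share = (after - before[i]) / (double) windowNanos;
            if (share > 0.9) { // used >90% of one core over the window (arbitrary threshold)
                ThreadInfo info = tmx.getThreadInfo(ids[i]);
                if (info != null) {
                    System.out.println("Possible runaway thread: " + info.getThreadName());
                }
            }
        }
    }
}
```

You could run something like this periodically and alert on it, but note that it only observes; it cannot cap the thread.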
If you are willing to re-architect your dynamic content and other services, you could write them in such a way that they support a "unit of work" that is somehow less than a full request's worth of work, and then have your request-processing thread execute a unit of work and sleep an appropriate amount of time. You can lessen your CPU usage by doing this, but you will also certainly slow down response times measurably.
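In skeleton form, that idea might look like the sketch below (the Work interface, the unit size, and the sleep time are all illustrative choices, not an established API):

```java
// Sketch of throttled request processing: do a bounded chunk of work, then yield the CPU.
public class ThrottledWorker {
    interface Work { boolean doUnit(); } // hypothetical interface: returns false when finished

    static void process(Work work, long sleepMillisPerUnit) throws InterruptedException {
        while (work.doUnit()) {
            Thread.sleep(sleepMillisPerUnit); // deliberately give up the CPU between units
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Toy "request": five units of work with a 100 ms pause between them.
        final int[] step = {0};
        process(() -> ++step[0] < 5, 100);
        System.out.println("done after " + step[0] + " units");
    }
}
```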
Is there anything wrong with a fully-utilized CPU if your users are happy? Perhaps the real solution to your problem is more or bigger hardware.
I was just wondering why there is a need to go through all the trouble of creating distributed systems for massively parallel processing when we could just create individual machines that support hundreds or thousands of cores/CPUs (or even GPGPUs) per machine.
So basically: why should you do parallel processing over a network of machines when it could instead be done at much lower cost and much more reliably on one machine that supports numerous cores?
I think it is simply cheaper. Those machines are available today; there is no need to invent something new.
The next problem is the complexity of the motherboard: imagine 10 CPUs on one board, with that many interconnects! And if one of those CPUs dies, it could take down the whole machine.
You can of course write a program for a GPGPU, but it is not as easy as writing it for a CPU. There are many limitations: cache per core is really small, if it exists at all; communicating between cores is impossible (or possible, but very costly); etc.
Linking many computers is more stable, more scalable, and cheaper, thanks to the long history of doing it.
What Petr said. As you add cores to an individual machine, communication overhead increases. If memory is shared between cores, then the locking architecture for shared memory, and the caching, generate increasingly large overheads.
If you don't have shared memory, then effectively you're working with different machines, even if they're all in the same box.
Hence it's usually better to develop very large scale apps without shared memory. And it's usually possible as well, although communication overhead is often still large.
Given that this is the case, there's little use for building highly multicore individual machines, though some do exist, e.g. NVIDIA Tesla...
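To make the shared-memory locking overhead concrete, a small (admittedly crude; no JIT warm-up) sketch comparing a contended counter with thread-private counters:

```java
import java.util.ArrayList;
import java.util.List;

public class ContentionDemo {
    static long shared = 0;
    static final Object lock = new Object();

    static long run(int threads, final boolean contended) throws InterruptedException {
        final long perThread = 5_000_000L;
        List<Thread> workers = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            Thread t = new Thread(() -> {
                long local = 0;
                for (long j = 0; j < perThread; j++) {
                    if (contended) {
                        synchronized (lock) { shared++; } // every core fights for one lock
                    } else {
                        local++;                          // private counter: no communication
                    }
                }
                if (!contended) {
                    synchronized (lock) { shared += local; } // publish once at the end
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("contended shared counter: " + run(cores, true) + " ms");
        System.out.println("thread-local counters   : " + run(cores, false) + " ms");
    }
}
```

The gap between the two numbers typically widens as you add cores, which is exactly the argument against huge shared-memory boxes.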
How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.NET application that has a couple hundred users. Aspects of it are fairly processor-intensive, but thus far we have done fine with only one server running both SQL Server and the site. It's running Windows Server 2003 at 3.4 GHz with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering about the best way to determine whether the server is overloaded by the application's usage, or whether I need to do something to fix the application (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some info on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
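As a back-of-envelope example of that calculation (every figure below is made up for illustration, not measured from your app):

```java
// Toy capacity estimate: all inputs are assumptions you'd replace with measurements.
public class CapacityEstimate {
    public static void main(String[] args) {
        double concurrentUsers  = 300;      // assumed peak concurrent users
        double reqPerUserPerSec = 1 / 30.0; // assume one page view per user every 30 s
        double cpuMsPerRequest  = 40;       // assumed CPU cost of an average request

        double peakReqPerSec = concurrentUsers * reqPerUserPerSec;       // 10 req/s
        double coresNeeded   = peakReqPerSec * cpuMsPerRequest / 1000.0; // 0.4 cores
        System.out.printf("peak: %.1f req/s, needing %.2f CPU cores%n",
                peakReqPerSec, coresNeeded);
    }
}
```

With numbers like those, a single box has plenty of headroom; the point of the exercise is to plug in your real measurements and see where the curve crosses your hardware.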
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio), you can try setting up a testing server and running some synthetic requests against it, and see if there's any specific part of the code taking unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see whether that is even the problem. (A number akin to the UNIX "load average" could be a useful metric; I don't know if Windows has anything quite like it. It's basically the average number of threads that want CPU time per time slice.)
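Where a JVM is handy, it exposes exactly that number when the OS provides it (on many Windows JVMs the call just returns -1, which rather confirms the uncertainty above):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class LoadAverage {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        double load = os.getSystemLoadAverage(); // -1.0 if the platform doesn't report it
        System.out.println("cores: " + os.getAvailableProcessors()
                + ", 1-min load average: " + load);
        // Rule of thumb: sustained load well above the core count means threads are queuing.
    }
}
```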
Also check the obvious, that you aren't running out of bandwidth.
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (output caching, data caching, cache dependencies, and so on); see the sketch below.
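The question is ASP.NET, but the data-caching idea is the same everywhere. A minimal time-to-live cache sketch (in Java for illustration; the class, the 60-second TTL, and the key names are all arbitrary, and concurrent callers may occasionally recompute the same value twice):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal TTL cache: recompute a value only when the cached copy has expired.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key, Supplier<V> loader) {
        long now = System.currentTimeMillis();
        Entry<V> e = map.get(key);
        if (e == null || e.expiresAt < now) {             // miss or stale: reload
            e = new Entry<>(loader.get(), now + ttlMillis);
            map.put(key, e);
        }
        return e.value;
    }

    public static void main(String[] args) {
        TtlCache<String, String> cache = new TtlCache<>(60_000); // 60 s TTL (arbitrary)
        String stats = cache.get("report", () -> "expensive query result here");
        System.out.println(stats);
    }
}
```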
See also how your SQL Server is doing... indexes are a good place to start, but they're not the only thing to look at.
On that hardware, a .NET application should be able to serve about 200-400 requests per second. With only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of spare capacity on that box, even with SQL Server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.