When setting up a load-balanced PeopleTools/PeopleSoft environment (2 web, 2 app, 2 process schedulers, all on separate hardware), how do you configure the report repository/repositories? The intent is to have two totally independent stacks but also allow reports run on one side to be available from the other.
I have a single report repository that is on a shared area accessible to both sides/sets of the stack. I've never tried having 2 report repositories, but it sounds like a replication nightmare. If it can be done I'd be interested to hear more.
This is not necessary by default, but it is required if you have significant load on the report nodes.
This can be achieved by creating two report nodes with different domain names, configured through the web server configuration file. Please don't forget to make one report node the master and the other the slave.
Thanks
I need to run 4 background jobs for cleaning temp files and processing some files. I have chosen Quartz.NET for the job.
I have an ASP.NET website which accepts uploaded files that will be processed by the Quartz jobs at night.
First I thought about making a console application for the Quartz jobs, keeping the website and the jobs totally decoupled.
But then I saw that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config. So a question came to my mind:
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
Furthermore, I want the website to show a special page (like "We are processing the files...") while the Quartz jobs are running.
What I care about most is performance: I don't want the website to be affected by the Quartz jobs, nor the jobs' performance affected by the website.
So, what should I do? Have you done something like this, and can you give me any advice?
Should I run the jobs through the ASP.NET instance, or should I do this in a console application?
If you want to have to manually trigger them each night, sure. But a console application run by the host system's task scheduler seems like a more automated solution. A web application is more of a request/response system; it's not really suited for periodic or long-running actions. Scheduling some sort of background operation on the host, such as a scheduled console application or a Windows service, would serve that purpose better.
Note that if it truly needs to be unattended and run even when there's nobody logged in to the server console, a Windows service may be a better fit than a console application.
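If you go the console-application or Windows-service route, one way to keep the jobs out of the web process is to host the Quartz.NET scheduler there. A minimal sketch, assuming the Quartz.NET 2.x-style API, where NightlyCleanupJob is a hypothetical job standing in for your temp-file cleanup and file processing:

```csharp
using System;
using Quartz;
using Quartz.Impl;

// Hypothetical job; put your temp-file cleanup / file processing here.
public class NightlyCleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // clean temp files, process uploaded files, etc.
    }
}

class Program
{
    static void Main()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<NightlyCleanupJob>()
            .WithIdentity("nightlyCleanup")
            .Build();

        // Example schedule: run every night at 02:00.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("nightlyTrigger")
            .WithCronSchedule("0 0 2 * * ?")
            .Build();

        scheduler.ScheduleJob(job, trigger);

        Console.WriteLine("Scheduler running; press Enter to exit.");
        Console.ReadLine();
        scheduler.Shutdown();
    }
}
```

The same hosting code can be moved into a Windows service's OnStart/OnStop if you need it to run with nobody logged in.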
I've seen that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config
Console applications have App.config files which serve the same purpose. You can use that.
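For example, a connection string and a path defined in the console application's App.config can be read with ConfigurationManager just as they would be from web.config (the key names below are placeholders):

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration

class ConfigExample
{
    static void Main()
    {
        // "UploadsDb" and "UploadFolder" are hypothetical keys defined in App.config.
        string connectionString =
            ConfigurationManager.ConnectionStrings["UploadsDb"].ConnectionString;
        string uploadPath = ConfigurationManager.AppSettings["UploadFolder"];

        Console.WriteLine("{0} / {1}", connectionString, uploadPath);
    }
}
```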
I want the website to show a special page when the Quartz jobs start running
You definitely want to keep the two decoupled. But you may be able to accomplish this easily enough. Maybe have some sort of status flag in the database which indicates whether any particular record is "currently being processed". The website can simply look for any records with that flag when a page loads and display that message.
There are likely a couple of different ways to synchronize status here, it doesn't really matter what you choose. What does matter is that the systems remain decoupled and that any status which is statically persisted is handled somewhat carefully to avoid an errant process from leaving an incorrect status. (For example, a background task sets a status of "processing" and then fails in some way. The website would forever indicate that it's processing.)
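As a rough sketch of that status-flag check (the table and column names are invented for illustration), the website could run something like this when rendering the page:

```csharp
using System.Data.SqlClient;

static class ProcessingStatus
{
    // Returns true if the background jobs have flagged any record as in progress.
    public static bool IsProcessingInProgress(string connectionString)
    {
        // "UploadedFiles" and "Status" are hypothetical names; adapt to your schema.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM UploadedFiles WHERE Status = 'Processing'", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}
```

The job would set the flag when it starts and clear it in a finally block, so a failed run doesn't leave the site stuck on the "processing" page.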
I installed WordPress using EC2. I created a load balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the load balancer. But I'm still getting a database error and have to restart the instances. If I'd like to put 4 instances behind the load balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched an AMI; the default value is 1, and I'm not sure if I should enter 3 or 4 to create multiple instances in one click.
Also, if I make an update on the Wordpress1 instance, will the updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances and a database etc., you should consider using AWS CloudFormation. CloudFormation is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress including a database and autoscaling groups (example wordpress template).
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file or, in your case, upgrade WordPress, it will only happen on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build, in your case the frontend web servers and the database, should be allowed to fail without interrupting your service.
Another point: you should avoid doing things by hand; automation is key.
An environment where you need to link your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and replaced.
For your web servers you can use autoscaling groups to get this behavior.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
Applying upgrades in the cloud can be tricky, and there are different ways to do this: for example, using a shared NFS mount with the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud may not be the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, no instance can store data that may be needed by other instances. The most common requirement is the database, which will need to exist on its own instance outside of the autoscaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins and themes within the wp-content folder inside the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
During our development of schemas, orchestrations, ports, etc., we've been exporting MSIs and binding files for deployment into our test and ultimately production environments.
So, for example, we set up a series of receive ports/locations in a single BizTalk app for the purpose of receiving all HL7 v2 messages from our HCIS. We then exported that to a bindings file and imported it into test.
Then, as we developed new schemas, we exported each schema into its own MSI file and deployed that into the same BizTalk application in our test environment. We did that because the schemas are specific to the inbound messages from our HCIS.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that, if I look at the installed programs list in the Control Panel, I only see 1 application. So if I want to uninstall and re-install a particular schema, I'm not sure what will happen. For some reason, I half expected to see an entry for every MSI I installed, but I suppose that because they're all going into the same BizTalk application, they are all registered in Windows as the same application. I'm betting there is a better way to do this, any suggestions?
You can, and probably should, create different applications for each logical grouping of code. If you examine the 'deploy' section of the project properties you'll see a text box to enter your application name. When you trigger a deploy they will be placed into a separate application with the name you provide. You'll see it in the BizTalk management console.
We deploy to dev using the framework mentioned below. Then, to deploy to QA, right-click on the application and create an MSI from that point. It will allow creating an MSI for only one application.
NOTE: the deploy setting is NOT saved globally. If another developer opens the project his project will not inherit the application name you've set.
We use the biztalk deployment framework to help manage changes when we do development.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that, if I look at the installed programs list in the control panel, I only see 1 application.
I can only think of two scenarios where you might observe this behaviour:
You have multiple different MSIs (one for each schema) which you are importing into BizTalk (and hence they are appearing in the BizTalk Admin Console), but you are not running the MSI on the local machine (and so it is not appearing in 'Installed Programs'); or
Your MSIs are all named the same, in which case after the import into BizTalk and the local install, you only have a single program visible in 'Installed Programs'.
I'm betting there is a better way to do this, any suggestions?
With regards to approach, you are certainly along the correct lines. I tend to advise clients to group logical artifacts into a single logical bucket - either project or Application - that can be deployed (and redeployed) without affecting other parts of the system.
In a HL7 scenario, one logical bucket might be Patient artifacts (schemas and supporting maps) and a second may be Financial artifacts (schemas and supporting maps). These logical buckets can either be deployed to different BizTalk Applications, or the same BizTalk Application depending on your requirements. However, the main benefit here is that they are separate and therefore all artifacts do not need to be redeployed if you need to make a small modification to A19 - Patient Query/Response schema for example.
How to deploy is another question entirely. I'm a massive fan of MSBuild and have written comprehensive build scripts that I tweak and reuse for each project I work on. These deployment scripts will tear down an existing environment and re-build from the ground up, creating Applications, deploying Resources, importing Bindings, creating Hosts and Host Instances etc. before finally starting the application. This approach removes all human error from the process and tends to be favoured by clients who often have their infrastructure teams perform the deployment rather than their development teams.
I notice that Jay mentioned the use of the BizTalk Deployment Framework. I personally struggle with this tool, partly because I need to maintain my configuration in Excel which I can't check in to source control easily.
I have recently deployed my web role to Windows Azure. In the properties of my WebRole I have set Enable Diagnostics.
I can also see that it correctly maps to a storage account once deployed by viewing the configuration file of the hosted service.
I have not set up anything else for diagnostics; I am not aware that I need to do anything else.
I am now setting up AzureWatch (by paraleap) to monitor my instances however it reports that WADPerformanceCountersTable does not exist.
I am very new to Azure, don't have a clue how the diagnostics work, and can't find anything on Google that shows me how. Could someone please show me the way?
OK, I figured it out and will leave this here for others to follow.
Step 1
If you follow http://dunnry.com/blog/2012/02/27/SettingUpDiagnosticsMonitoringInWindowsAzure.aspx, Windows Azure Diagnostics will start saving diagnostic data into your attached blob storage.
Special Note: These count towards your storage transactions, which is why you will see that number go up.
Step 2
However, I needed the WADPerformanceCountersTable, which should have been located in the tables section of the storage account but was never created. I needed this to use services like AzureWatch to monitor and spin instances up or down.
Special Note: These are performance counters, a specific subset of diagnostic information, and they aren't stored in the blob section by default.
Step 3
In your project you need to add which performance counters to monitor in the WebRole.cs.
Special Note: You won't have this if you just added an existing project to an Azure deployment project. Unless you specifically started the project from scratch and chose the Azure templates, you will need to create this manually. You would also need to add Microsoft.WindowsAzure.Diagnostics, Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient as references. The best way to see how it all works is to create a blank project from an Azure template and copy over the necessary items.
Step 4
Next you need to define which performance counters to monitor. Here is a great sample: http://code.msdn.microsoft.com/windowsazure/Windows-Azure-PerformanceCo-7d80ebf9
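As a rough sketch (based on the classic Azure SDK 1.x diagnostics API; the specific counter and transfer period are only examples), the OnStart in WebRole.cs might look like this:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Example counter; add whichever counters AzureWatch should see.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Push the collected samples to WADPerformanceCountersTable every minute.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```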
Extra Reference
Microsoft also has a few steps you can follow here that might help out if things still aren't working: http://msdn.microsoft.com/en-us/library/windowsazure/hh411521.aspx
Take a look at:
http://dunnry.com/blog/2012/02/27/SettingUpDiagnosticsMonitoringInWindowsAzure.aspx
There is also a lot of information on:
http://msdn.microsoft.com/en-us/library/windowsazure/gg433048.aspx
Is there a way to identify what process is using a particular DirectShow filter? Specifically a video capture filter.
If our application throws an exception trying to use a DirectShow filter because it's already in use, we would like to identify the process that is using the filter and kill it. Of course this is not a general purpose or distributed application but one installed on a dedicated computer whose sole purpose is to run our application.
Thanks,
Ideally, I think killing a process should be avoided by all means... many bad things can happen as a result. That said, my proposal consists of 5 parts:
Locating the filter DLL file in the file system.
Enumerating all processes.
Enumerating all loaded modules of each process.
Identifying which process is using the filter.
Killing the process.
Since you did not specify any language or programming framework, I will assume C#/.net just for convenience.
1- DirectShow filters are just COM objects, so they are registered in the system as such. You need to figure out the GUID (CLSID) of your filter; using this GUID, you can locate the registry key where this object's information is stored, and then you can retrieve the location of the DLL in the file system from there. Microsoft.Win32.Registry can be used to access the registry.
2- System.Diagnostics.Process.GetProcesses() can be used to enumerate all running processes.
3- System.Diagnostics.Process.Modules can be used to enumerate all modules (DLLs) loaded by the process.
The rest of the steps should be trivial.
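A rough sketch of steps 1 to 4 along those lines (the CLSID is a placeholder, and 32-bit vs 64-bit registry views and access-denied errors for protected processes are glossed over):

```csharp
using System;
using System.Diagnostics;
using Microsoft.Win32;

class FilterProcessFinder
{
    static void Main()
    {
        // Placeholder CLSID; substitute the GUID of your capture filter.
        string clsid = "{00000000-0000-0000-0000-000000000000}";

        // Step 1: find the filter DLL path from its COM registration.
        string dllPath = null;
        using (RegistryKey key =
            Registry.ClassesRoot.OpenSubKey(@"CLSID\" + clsid + @"\InprocServer32"))
        {
            if (key != null)
                dllPath = key.GetValue(null) as string; // default value holds the DLL path
        }
        if (dllPath == null)
            return;

        // Steps 2-4: scan every process's loaded modules for that DLL.
        foreach (Process process in Process.GetProcesses())
        {
            try
            {
                foreach (ProcessModule module in process.Modules)
                {
                    if (string.Equals(module.FileName, dllPath,
                        StringComparison.OrdinalIgnoreCase))
                    {
                        Console.WriteLine("{0} (PID {1}) has the filter loaded.",
                            process.ProcessName, process.Id);
                        // Step 5 would be process.Kill(), with all the caveats mentioned above.
                    }
                }
            }
            catch (Exception)
            {
                // Access to some system processes will be denied; skip them.
            }
        }
    }
}
```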