BizTalk 2010: restart a receive location every 3 hours
We have an issue with the third-party CodePlex SFTP adapter 1.4 (the receive location "freezes"). There may be a fix in version 1.5, but as a short-term workaround, is there a way to schedule a restart of one receive location (disable/enable)?
You can use Task Scheduler to schedule a PowerShell script that enables/disables your receive location.
Here you have a script example: https://biztalklive.blogspot.com.es/2017/10/powershell-script-to-enable-biztalk.html?m=1 and here is how to schedule it: https://social.technet.microsoft.com/wiki/contents/articles/26747.windows-server-how-to-schedule-a-powershell-script-to-auto-run.aspx.
You can create a PowerShell file with the following script:
# Get the receive locations that are disabled
[array]$ReceiveLocations = Get-WmiObject MSBTS_ReceiveLocation -Namespace 'root\MicrosoftBizTalkServer' -Filter 'IsDisabled = "True" and (Name = "Receive Location Name1" or Name = "Receive Location Name2")'

foreach ($ReceiveLocation in $ReceiveLocations)
{
    # Enable the receive location
    $ReceiveLocation.InvokeMethod("Enable", $null)
}
and create a Windows Task Scheduler task to execute the script.
For a temporary solution, you can put that receive location in its own host instance and use the Windows Task Scheduler to stop and start the service periodically.
You don't need any scripts or other complications, just the NET STOP/START commands.
The setup is described in this other SO thread: How to restart a windows service using Task Scheduler
You can find the service name in the service's Properties dialog in the Services control panel.
I am testing our application locally, and it seems like I can only create one game session using GameLift Local.
So what I did is run GameLift Local:
java -jar GameLiftLocal.jar -p 9080
run the custom GameLift server I wrote in C# and Unity,
and use the CLI to create a game session:
aws gamelift create-game-session --endpoint-url http://localhost:9080 --maximum-player-session-count 2 --fleet-id fleet-123d
On the first run, it succeeds and creates the game session.
When I create another game session by issuing the same command above, it results in:
HTTP-Dispatcher - No available process.
Why is this? Can we only create one game session locally?
If you are trying to make another game session, you need to run multiple game server processes.
GameLift tracks the game session's status by receiving server-side API calls from the game server process.
I think this diagram can help you. :)
https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-api-interaction-vsd.html
According to the docs:
Each server process should only host a single game session.
...
When testing locally with GameLift Local, you can start multiple server processes. Each process will connect to GameLift Local.
Sounds like you need to run multiple instances of your game server process, each connecting to the same GameLift Local.
Source: https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html
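For reference, here is a minimal sketch of that approach in Python (the server executable path and the -port flag are assumptions; your build must read the port and pass it to ProcessReady()). It launches several copies of the local server build so GameLift Local has more than one process to place game sessions on:
import subprocess

# Hypothetical path to your local server build; adjust for your project.
SERVER_EXE = "Builds/MyGameServer.exe"

# Start three server processes on distinct ports. Each one registers with
# GameLift Local via InitSDK()/ProcessReady() and can host one game session.
processes = [subprocess.Popen([SERVER_EXE, "-port", str(port)])
             for port in (7777, 7778, 7779)]

# Keep them running until they exit on their own.
for process in processes:
    process.wait()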
I've used the Ansible install to run all services on a single host, and I have two separate physical node controllers.
Everything installed fine and all of my services are green. But I don't think image workers are launching to do my first image uploads. As I'm trying to troubleshoot I see that no node controllers are reported by:
euserv-describe-node-controllers
It doesn't return an error, just blank output. I've unregistered and re-registered the two node controllers and copied the CLC admin keys without errors, but I still can't see output from that command. cloud-output and the various NC log files seem to show successful startup.
I've switched to the ImagingServiceAdministrator account to look for imaging worker instances with the command below, and got blank output, which was what started me looking at the NCs:
euca-describe-instances --filter tag-value=euca-internal-imaging-workers
The imaging service is not required for installing instance-store images, e.g.:
python <(curl -Ls https://eucalyptus.cloud/images)
or (on an ansible deployed cloud):
eucalyptus-images --size 1
To check on the status of node controllers in a deployment you will need to have cloud administrator credentials. You can check this using:
euare-getcallerid
euare-accountlist
and verifying that the eucalyptus account is being used.
Node controllers are managed via a cluster controller so you should check the status for both:
euserv-describe-services -a --filter service-type=cluster
euserv-describe-services -a --filter service-type=node
This differs from euserv-describe-node-controllers as it does not include information on running instances.
If there are any issues you can check for service events:
euserv-describe-events
and look at the logs (/var/log/eucalyptus/...) to further investigate.
Check that the IP addresses you used when registering the node controllers are the ones the node controllers are listening on (NC_ADDR in /etc/eucalyptus/eucalyptus.conf).
If using firewalld, restart/reload the configuration after deployment to ensure it is running with the latest settings.
As part of a batch job I create 4 command lines through Control-M which invoke a legacy console application written in VB6. The console application invokes an ActiveX EXE server which performs a set of analytic jobs calculating outputs. The ActiveX EXE server was coded as a singleton, but when invoked through Control-M I get 4 instances running. The ActiveX EXE server also does not tear down once the job has completed and the command line has closed itself.
I created 4 .bat files which, once launched manually on the server, simulate the calls made through Control-M, and the ActiveX EXE server behaves as expected, i.e. there is only ever 1 instance running, and once complete it closes down gracefully.
What am I doing wrong?
Control-M jobs run under a service account; it is the same as when we log in as a user and execute a job. How did you test this? Did you manually execute each batch job one after another, or did you execute all the batch jobs at the same time from different terminals? You can do one thing: run the Control-M jobs with a time interval, e.g. the first at 09:00, the second at 09:05, the third at 09:10 and the fourth at 09:15, and see if that fixes your issue.
Maybe your job cannot use the Desktop environment.
Check your agent service settings:
Log on As:
User account under which Control‑M Agent service will run.
Valid values:
Local System Account – Service logs on as the system account.
Allow Service to Interact with Desktop – This option is valid only if the service is running as a local system account.
Selected – the service provides a user interface on a desktop that can be used by whoever is logged in when the service is started. Default.
Unselected – the service does not provide a user interface.
This Account – User account under which Control‑M Agent service will run.
NOTE: If the owner of any Control-M/Server jobs has a "roaming profile" or if job output (OUTPUT) will be copied to or from other computers, the Log in mode must be set to This Account.
Default: Local System Account
I outlined my small project in a different post - to summarize it again quickly, I am trying to do the following:
Write an R script that pulls data from a website
Schedule the R script to automatically run daily at the same time
Write / append the R script's output to a database
I am familiar with R web-scraping packages (rvest, RSelenium) for the first bullet. For the second bullet, just today I learned how to create a crontab to run my script when I desire; however, the crontab does not run the script when my computer is off, or so I've read.
How can I have the crontab run even with my computer off? I am somewhat (not really) familiar with EC2 instances, but if I have my R script in an EC2 instance, could I schedule a crontab for the script there so that it runs with my computer off?
Thanks in advance for help!
Since cron is a service that runs on the instance, you can't have it start the EC2 instance for you; it's a catch-22.
You can treat EC2 instances as computers that run in someone else's cellar (most of the time at least). You wouldn't expect a computer to run code when it's not turned on and it's exactly the same for an EC2 instance.
I suggest you consider if this is really the setup you want, it sounds to me that you'd be better served using AWS Lambda combined with one of Amazon's hosted data stores (RDS, DynamoDB, SimpleDB, or even S3). The downside here is that you're limited to JavaScript, Python, and Java and as such can't use R (well, you can, but it's messy since you'll have to package everything you need in a JS/Python/Java app and start it from there).
If you really want to run your R script on the EC2 instance you can start the instance with a lambda and then shut it down from your script. Just make sure your instance isn't set to terminate on shutdown.
Regardless of which path you choose, you will need to create a lambda and run it from a scheduled CloudWatch Event.
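If you would rather set up that schedule programmatically than through the console, a minimal boto3 sketch could look like the following (the rule name, schedule expression, and lambda ARN are placeholders):
import boto3

events = boto3.client("events")

# Create (or update) a rule that fires every day at 06:00 UTC.
events.put_rule(
    Name="daily-r-script-trigger",
    ScheduleExpression="cron(0 6 * * ? *)",
)

# Point the rule at the lambda that will start the EC2 instance.
events.put_targets(
    Rule="daily-r-script-trigger",
    Targets=[{
        "Id": "start-scraper-instance",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:StartScraper",
    }],
)
(You also need to grant CloudWatch Events permission to invoke the lambda, e.g. with the Lambda add-permission API.)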
Then you just need to implement the lambda, either to run your script or to use the EC2 API to start the instance.
If you use the lambda to start the EC2 instance you should not use cron on the instance to run the script at a specific time, but run it on startup. Then you have your script shut down the instance when it's finished.
Here's an example Python script for starting an EC2 instance from a lambda to get you started:
import logging
import boto3

# Set up logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Set up a boto session to get credentials and region
session = boto3.session.Session()

# Set up EC2
ec2 = session.resource("ec2")

# The instance to start
instance_id = "i-1234567890abcd"

def lambda_handler(event, context):
    logger.info('Start handling event.')
    logger.info('Starting instance ' + instance_id)
    instance = ec2.Instance(instance_id)
    response = instance.start()
    try:
        current_state = response['StartingInstances'][0]['CurrentState']
    except (KeyError, IndexError):
        logger.warning('Unexpected response when starting instance: {}'.format(response))
    else:
        # CurrentState is a dict like {'Code': 0, 'Name': 'pending'}
        if current_state['Name'] not in ('pending', 'running'):
            logger.warning('Instance {} is in unexpected state {} after starting'.format(instance_id, current_state['Name']))
        else:
            logger.info('Started instance ' + instance_id)
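To cover the shutdown half mentioned above, here is a minimal sketch of what your script's final step could do, assuming it runs on the instance itself, the instance metadata service (IMDSv1) is reachable, and the instance's role allows ec2:StopInstances:
import urllib.request
import boto3

# Look up this instance's ID from the EC2 instance metadata service.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

# Stop (not terminate) the instance so the lambda can start it again later.
boto3.resource("ec2").Instance(instance_id).stop()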
I have recently started using Spark and I want to run a Spark job from a Spring web application.
I have a situation where I am running the web application in a Tomcat server using Spring Boot. My web application receives a REST web service request, and based on that it needs to trigger a Spark calculation job in a YARN cluster. Since my job can take long to run and accesses data from HDFS, I want to run the Spark job in yarn-cluster mode, and I don't want to keep a SparkContext alive in my web layer. Another reason for this is that my application is multi-tenant, so each tenant can run its own job; in yarn-cluster mode each tenant's job can start its own driver and run in its own Spark cluster. In the web app JVM, I assume I can't run multiple SparkContexts in one JVM.
I want to trigger Spark jobs in yarn-cluster mode from a Java program in my web application. What is the best way to achieve this? I am exploring various options and looking for your guidance on which one is best:
1) I can use the spark-submit command-line shell to submit my jobs. But to trigger it from my web application I need to use either the Java ProcessBuilder API or some package built on ProcessBuilder. This has two issues. First, it doesn't sound like a clean way of doing it; I should have a programmatic way of triggering my Spark applications. Second, I lose the capability of monitoring the submitted application and getting its status. The only crude way of doing that is reading the output stream of the spark-submit shell, which again doesn't sound like a good approach.
2) I tried using the YARN Client to submit the job from the Spring application. Following is the code I use to submit a Spark job using the YARN Client:
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.deploy.yarn.Client;
import org.apache.spark.deploy.yarn.ClientArguments;

// Hadoop configuration (reads core-site.xml/yarn-site.xml from the classpath)
Configuration config = new Configuration();
System.setProperty("SPARK_YARN_MODE", "true");
SparkConf conf = new SparkConf();
ClientArguments cArgs = new ClientArguments(sparkArgs, conf);
Client client = new Client(cArgs, config, conf);
client.run();
But when I run the above code, it tries to connect on localhost only. I get this error:
15/08/05 14:06:10 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/05 14:06:12 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
So I don't think it can connect to the remote machine.
Please suggest the best way of doing this with the latest version of Spark. Later I plan to deploy this entire application on Amazon EMR, so the approach should work there as well.
Thanks in advance
Spark JobServer might help: https://github.com/spark-jobserver/spark-jobserver. This project receives RESTful web requests and starts a Spark job. Results are returned as a JSON response.
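For reference, here is a minimal sketch of driving JobServer over HTTP from Python (the JobServer address, jar path, app name, and job class are placeholders; the /jars and /jobs endpoints are described in the project's README):
import requests

JOBSERVER = "http://localhost:8090"

# Upload the application jar once under an app name.
with open("target/my-spark-job.jar", "rb") as jar:
    requests.post(JOBSERVER + "/jars/myapp", data=jar)

# Submit a job and wait synchronously for the JSON result.
response = requests.post(
    JOBSERVER + "/jobs",
    params={"appName": "myapp",
            "classPath": "com.example.MySparkJob",
            "sync": "true"},
)
print(response.json())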
I also had similar issues trying to run a Spark app that connects to a YARN cluster: with no cluster config it was trying to connect to the local machine as the main node of the cluster, which obviously failed.
It worked for me when I placed core-site.xml and yarn-site.xml on the classpath (src/main/resources in a typical sbt or Maven project structure); the application then correctly connected to the cluster.
When using spark-submit, the location of those files is typically specified by the HADOOP_CONF_DIR environment variable, but for a stand-alone application it had no effect.
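On the monitoring concern from the question: whichever submission route you take, once YARN has accepted the application you can poll its status through the ResourceManager's REST API instead of scraping spark-submit output. A minimal sketch (the ResourceManager host and application ID are placeholders):
import requests

RM = "http://resourcemanager:8088"
app_id = "application_1438778026086_0001"

# The ResourceManager exposes application info at /ws/v1/cluster/apps/{id}.
app = requests.get(RM + "/ws/v1/cluster/apps/" + app_id).json()["app"]
print(app["state"], app["finalStatus"])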