My requirement is to run the Intel PIN tool for a specified amount of time, say around 1 minute, and then terminate.
For example:
I want to run notepad.exe for 1 minute under PIN. After 1 minute, do the post-processing, close the log files properly, and terminate notepad.exe from the Pintool.
Use PIN_ExitApplication() to achieve this. You can perform the post-processing in the Fini callback.
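A minimal sketch of that approach, assuming the standard Pin API from pin.H; the one-minute delay, exit code, and tool structure are illustrative:

#include "pin.H"
#include <iostream>

// Fini runs when the application exits, including via PIN_ExitApplication.
static VOID Fini(INT32 code, VOID *v)
{
    // Do the post-processing and close log files here.
    std::cerr << "Exited with code " << code << std::endl;
}

// Internal tool thread: wait one minute, then terminate the application.
static VOID Watchdog(VOID *arg)
{
    PIN_Sleep(60 * 1000);
    PIN_ExitApplication(0);   // triggers the registered Fini callbacks
}

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    PIN_AddFiniFunction(Fini, NULL);
    PIN_SpawnInternalThread(Watchdog, NULL, 0, NULL);
    PIN_StartProgram();       // never returns
    return 0;
}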
Aim:
I am working in Qt Creator and want to interface a modem with my main program. For that, I need to fetch data from the modem for a specific time interval (say 3 seconds) only, and after that I have to stop receiving data.
My Work:
I tried to implement it using:
QTimer::singleShot(3000,this,SLOT(SLOT1()));
But the limitation with it:
It calls SLOT1 after 3000 ms, but that is not what I want.
My requirement:
is to use a timer to fetch data for 3 seconds only and then stop.
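One way to get that behavior is to start receiving and arm a single-shot timer that disconnects the read slot after 3 seconds. A minimal sketch, assuming the modem is read through QSerialPort; the class and member names are illustrative:

#include <QObject>
#include <QSerialPort>
#include <QTimer>
#include <QByteArray>

class ModemReader : public QObject
{
    Q_OBJECT
public:
    explicit ModemReader(QSerialPort *port, QObject *parent = nullptr)
        : QObject(parent), m_port(port) {}

    void startReading()
    {
        connect(m_port, &QSerialPort::readyRead,
                this, &ModemReader::onReadyRead);
        // Stop receiving 3 seconds from now.
        QTimer::singleShot(3000, this, &ModemReader::stopReading);
    }

private slots:
    void onReadyRead() { m_buffer.append(m_port->readAll()); }

    void stopReading()
    {
        disconnect(m_port, &QSerialPort::readyRead,
                   this, &ModemReader::onReadyRead);
        // m_buffer now holds everything received in the 3-second window.
    }

private:
    QSerialPort *m_port;
    QByteArray   m_buffer;
};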
I am trying to use clGetEventProfilingInfo for timing my kernels.
Is there any facility to specify a number of iterations over which the start and end times are reported?
If the kernel is run only once then, of course, it has a lot of overhead associated with it. So to get the best timing we should run the kernel several times and take the average time.
Do we have such a parameter when profiling via the API? (We do have such parameters when we use third-party software tools for profiling.)
The clGetEventProfilingInfo function will return profiling information for a single event, which corresponds to a single enqueued command. There is no built-in mechanism to automatically report information across a number of calls; you'll have to code that yourself.
It's pretty straightforward to do - just query the start and end times for each event you care about and add them up. If you are only running a single kernel (in a loop), then you could just use a wall-clock timer (with clFinish before you start and stop timing), or take the difference between the time the first event started and the last event finished.
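A sketch of that accumulation, assuming the command queue was created with CL_QUEUE_PROFILING_ENABLE and the kernel and work size are already set up; error checking is omitted and the function name is illustrative:

#include <CL/cl.h>

// Average kernel execution time in milliseconds over 'iterations' runs.
double average_kernel_ms(cl_command_queue queue, cl_kernel kernel,
                         size_t global_size, int iterations)
{
    cl_ulong total_ns = 0;
    for (int i = 0; i < iterations; ++i) {
        cl_event evt;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size,
                               NULL, 0, NULL, &evt);
        clWaitForEvents(1, &evt);

        cl_ulong start, end;
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                                sizeof(start), &start, NULL);
        clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                                sizeof(end), &end, NULL);
        total_ns += end - start;   // profiling values are in nanoseconds
        clReleaseEvent(evt);
    }
    return (double)total_ns / iterations * 1e-6;  // ns -> ms
}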
I need a simple way to run a program using digitalWrite() for a certain number of seconds.
I am driving two DC motors. I already have my setup complete, and have driven the motors using pause() and digitalWrite(). I will be making time measurements in milliseconds, need an adjustable runtime, and would prefer non-blocking code.
You could use a timer-driven interrupt that triggers code to handle the output (decrementing the required time value and eventually switching the output off), or use threads.
I would suggest using threads.
Your requirement is similar to a "blinking diodes" case I described in a different thread
If you replace the defines that set the time intervals with variables, you could use that code to drive your outputs, or simplify the whole thing by using only one thread working the same way the aforementioned timer interrupt would.
If you would like to try the timer interrupt-driven approach, this post gives a good overview and examples (but you have to change OCR1A to about 16 to get an overflow every 1 ms).
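A simpler non-blocking alternative on a stock Arduino is to poll millis() from loop(). A minimal sketch, where the pin number and runtime are assumptions:

const int MOTOR_PIN = 9;            // assumed wiring
unsigned long runTimeMs = 5000;     // adjustable runtime, in milliseconds
unsigned long startMs = 0;
bool running = false;

void setup() {
    pinMode(MOTOR_PIN, OUTPUT);
    digitalWrite(MOTOR_PIN, HIGH);  // switch the motor on
    startMs = millis();
    running = true;
}

void loop() {
    // Non-blocking: loop() keeps spinning, so other work can go here.
    if (running && millis() - startMs >= runTimeMs) {
        digitalWrite(MOTOR_PIN, LOW);  // time is up: switch the output off
        running = false;
    }
}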
I have two questions; the first is the main one.
1. I was able to display the date in a CICS map, but what I need is for it to tick, i.e., the display should be updated every second.
2. I have a COBOL-DB2 program which automatically writes data from the database (DB2) to a file. I want this program to be called on a schedule, i.e., every 1 hour, 2 hours, or every day.
Thank you
You can do this, but you will need to modify the traditional pseudo-conversational approach. Instead of returning and waiting for a user event, you can start your tran after some number of seconds with your current commarea and quit. If a user event occurs in that time, you can cancel your start request; if it doesn't, you can refresh the screen timestamp and repeat.
It is kind of a pain just to get a timestamp refreshed, and it doesn't make much sense to bother with unless you have a really good reason.
The DB2 stuff is plain easy. Start your tran using interval control, the same START AFTER() described above, and you can have it run hourly, or bihourly, or whatever.
I don't think you need to modify your pseudo-conversational approach to achieve this. Just issue an EXEC CICS START command with a one-second delay (just do this once) for a small program that issues a Send Map (or TC Write) to the terminal facility. Ideally, reserve a common area on the screen so all transactions can use a common program. At some point, when the updates are no longer required, CANCEL the START request. The way I see it, the timer update transaction will mix in nicely with your user-initiated transaction flow. If a user transaction is active when the start timer pops, the timer update program will just be delayed a little.
While this should work, you need to bear in mind that you might be driving 3,600 transactions per hour for each user. Is this feature really worth all that?
This is not possible in standard CICS using maps. The 3270 protocol does not lend itself to continually updating screens. The majority of automatic updating screens such as consoles and monitoring displays use native VTAM methods, building their own data streams.
It might be possible to do this using unformatted data, but I would not recommend it in CICS. Pseudo-conversational CICS does not have a program in control during screen display, and conversational programming is highly discouraged.
You can't really do this in CICS, which was designed for pseudo-interactive responses at best. It was designed for mainframes where the terminal was sent a whole page or screen at a time; the program read the screen as received (the terminal only sent back fields the user had actually changed), processed the changes, sent the response back, and quit.
This makes for very efficient data entry and inquiry programs. But realize that once the program has finished processing the screen, it has quit: it is gone, it is not even in memory any more, and all its resources have been reclaimed. That is what let a company run a mainframe with 300 terminals on maybe 10 megabytes of real memory, because a program waiting for you to respond uses no resources at all. If 200 people are running a data entry program, they are all running the same copy of the same re-entrant program; the application module might occupy 20K of memory, and it is the same 20K for every single one of them, plus perhaps 1K of writable storage per user for the part that reads a screen or a file record and does some calculations.
Think about that for a moment: the first user to start that data entry program uses 20K of memory for the application plus 1K for the writable data, and each user after that adds only another 1K. While they are sitting there looking at the terminal, all they might be using is 4 bytes in a table telling the system a terminal is connected; no other resources are used at all.
Having a screen update on a regular basis means that something has to keep running, which is not something CICS does very well. CICS is not intended for interactive processing the way a PC is, where your program really is running live the whole time.
EXEC CICS ASKTIME END-EXEC to update the timestamp.
EXEC CICS SEND MAP DATA ONLY END-EXEC to update the screen.
However, using the suggested
EXEC CICS START TRANSID ('name' | namefld)
DELAY (time)
END-EXEC.
is actually the better way.
I am writing a script to capture disk usage on a system (yes, I know there is software that can do this). For database reporting purposes, I want the interval between data points to be as equal as possible. For example, if I am polling disk usage every 10 minutes, I want every data point to be YYYY-MM-DD HH:[0-5]0:00. If I am polling every 5 minutes, it would be YYYY-MM-DD HH:[0-5][05]:00.
If I have a ksh script (or even a Perl script) to capture the disk usage, how can I have the script come active, wait for the next "poll time" before taking a snapshot, and then sleep for the correct number of seconds until the next "poll time"? If I am polling every 5 minutes and it is 11:42:00, then I want to sleep for 180 seconds so it takes a snapshot at 11:45:00, and then sleep for 5 minutes so it takes another snapshot at 11:50:00.
I wrote a way that works if my poll time is every 10 minutes, but if I change the poll time to a different number, it doesn't work. I would like it to be flexible on the poll time.
I prefer to do this in shell script, but if it is way too much code, Perl would be fine too.
Any ideas on how to accomplish this?
Thanks in advance!
Brian
EDIT: Wow - I left out a pretty important part - that cron is disabled, so I will not be able to use cron for this task. I am very sorry to all the people who gave that as an answer, because yes, that is the perfect way to do what I wanted, if I could use cron.
I will be using our scheduler to kick off my script right before midnight every day, and I want the script to handle running at the exact "poll times", sleeping in between, and exiting at midnight.
Again, I'm very sorry for not clarifying on crontabs.
cron will do the job.
http://en.wikipedia.org/wiki/Cron
Just configure it to run your ksh script at the times you need and you are done.
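For example, a crontab entry along these lines would take a snapshot every 10 minutes, on the minute (the script path is illustrative):

# m   h  dom mon dow  command
*/10  *  *   *   *    /usr/local/bin/disk_usage.ksh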
You might want to consider using cron. This is exactly what it was made for.
If I were doing this, I would use the system scheduler (cron or something else) to schedule my program to run every 180 seconds.
EDIT: I might have misunderstood your request. Are you looking more for something along the following lines? (I suspect there is a bug or two here):
ANOTHER EDIT: Remove dependency on Time::Local (but now I suspect more bugs ;-)
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw( strftime );

my $mins = 5;    # poll interval, in minutes

while (1) {
    my ( $this_sec, $this_min ) = (localtime)[ 0 .. 1 ];

    # Next minute mark that is a multiple of the poll interval.
    my $next_min = $mins * ( 1 + int( $this_min / $mins ) );

    # Whole minutes still to wait, plus the remainder of the current minute.
    my $to_sleep = 60 * int( $next_min - $this_min - 1 )
                 + 60 - $this_sec;

    warn strftime( '%Y:%m:%d %H:%M:%S - ', localtime ),
        "Sleeping '$to_sleep' seconds\n";
    sleep $to_sleep;

    # Take the disk-usage snapshot here, right on the poll time.
}
__END__
Have it sleep for a very short time, <=1 sec, and check each time whether poll time has arrived. Incremental processor use will be negligible.
Edit: cron is fine if you know what interval you will use and don't intend to change it frequently. But if you change intervals often, consider a continuously running script with a short sleep time.
Depending on how fine-grained your time resolution needs to be, you may need to write your script daemon-style: start it once, loop with while(1), and do the logic inside the program (you can check every second until it's time to run again).
Perl's Time::HiRes allows very fine granularity if you need it.