I'm trying to come up with a WinDbg command-line expression that takes the output of the !DumpHeap command and, for each address, reads a 64-bit value from offset 0x08 after that address. I think this is possible (though I'm not sure), but every attempt I've made so far fails with some error.
I've searched a lot, but most WinDbg articles show only simple examples, and my attempts based on them keep failing.
I have a process dump of an ASP.NET worker process. The process shows some memory growth, but there's no clear offender, so I'm trying to list a number of objects that appear many times in memory. I'm using sos.dll for the managed-debugging WinDbg extensions.
Here's what I'm trying to do:
.foreach(myaddress {!dumpheap -short -mt 000007fe998adea8})
{r #$t0=poi(myaddress+0x8);!do #$t0;.echo ************* myaddress}
Note that the above command must be on a single line - I only added a line break here for better readability.
For the above line, WinDbg prints this error: Couldn't resolve error at 'myaddress+0x8);!do #$t0;.echo ************* 00000001003cb870'.
I'm trying to iterate through all addresses returned by !DumpHeap - each address should go into the myaddress variable. Then, for each address, I'm trying to set the $t0 user register to the value read from myaddress+0x8. The !do (!DumpObject) command would then dump the object at that address.
If I run only (again, on one line in WinDbg):
.foreach(myaddress {!dumpheap -short -mt 000007fe998adea8})
{!do myaddress;.echo ************* myaddress}
I get a list of object dumps but this is one level higher than what I need. I want to drill down one level deeper and dump a particular member of these top-level objects that I'm iterating through.
Is this possible or am I on the wrong track with this?
After further searching, I found that I was using the wrong syntax. According to this question and to MSDN, variable names must be surrounded by spaces or enclosed in ${...} to work. After I used the ${...} enclosure, my script started working.
For future reference, here's how to run the script (keep it on one line in WinDbg):
.foreach(myaddress {!dumpheap -short -mt 000007fe998adea8})
{r #$t0=poi(${myaddress}+0x8);!do #$t0;.echo ************* myaddress}
Yes, you need spaces around the aliases. For example (again, keep it on one line):
.foreach ( place { .shell -ci "!DumpHeap -stat" sed 1,3d | awk "{print $1 }" } ) { .foreach ( plays { !DumpHeap -short -mt place } ) { r $t0 = poi( plays + 8 ) ; !do #$t0 ; .echo ========================================= } }
Related
I want to execute a batch file using PeopleCode in an Application Engine program, but Exec returns a non-zero code (value 1).
Below is the PeopleCode snippet:
Global File &FileLog;
Global string &LogFileName, &Servername, &commandline;
Local string &Footer;
If &Servername = "PSNT" Then
&ScriptName = "D: && D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";
End-If;
&commandline = &ScriptName;
/* Need to commit work or Exec will fail */
CommitWork();
&ExitCode = Exec("cmd.exe /c " | &commandline, %Exec_Synchronous + %FilePath_Absolute);
If &ExitCode <> 0 Then
MessageBox(0, "", 0, 0, ("Batch File Call Failed! Exit code returned by script was " | &ExitCode));
End-If;
Any help on how to resolve this issue would be appreciated.
Best bet is to do a trace of the execution.
Thoughts:
Can you log on to the process scheduler you are running this on and execute the script OK?
Is the AE being scheduled or called at run-time?
You should not need to change directory as you are using a fully qualified path to the script.
You should not need to call "cmd /c", as this creates an additional shell for your application to run within, making debugging harder, etc.
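For instance, based on the snippet in the question, the call could be reduced to something like this (a sketch, untested; it reuses the question's own path and flags):
&ScriptName = "D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";
&ExitCode = Exec(&ScriptName, %Exec_Synchronous + %FilePath_Absolute);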
Run a trace, and drop us the output. :) HTH
What about changing the working directory to D: inside of the script instead? You are invoking two commands and I'm wondering what the shell is returning to exec. I'm assuming you wrote your script to give the appropriate return code and that isn't the problem.
I couldn't tell from the question text, but are you looking for a negative result, such as -1? I think return codes are usually positive. 0 for success, some other positive number for failure. Negative numbers may be acceptable, but am wondering if Exec doesn't like negative numbers?
Perhaps the PeopleCode ChDir function still works as an alternative to two commands in one line? I haven't tried it for a LONG time.
Another alternative that gives you significant control over the process is to use java.lang.Runtime.exec from PeopleCode: http://jjmpsj.blogspot.com/2010/02/exec-processes-while-controlling-stdin.html.
All, I am running the following script to load data onto an Oracle server using a Unix box and sqlldr. Earlier it gave me an error saying sqlldr: command not found. I then added "SQLPLUS << EOF", and now it gives me an unexpected end of file syntax error on line 12, even though the script is only 11 lines of code. What do you think the problem is?
#!/bin/bash
FILES='ls *.txt'
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'@SERVERNAME << EOF
sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f
EOF
done
sqlplus will never know what to do with the command sqlldr; they are two separate, complementary command-line utilities for interfacing with Oracle DB.
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
# you don't want this: FILES='ls *.txt'
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_NM=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME   # defined here so the sqlldr line below can use it
for f in *.txt
do
# don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
#myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
sqlldr "$SCHEMA_NM/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files that have odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is to quote all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
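As a tiny illustration of both habits (the file name is made up): with a file called my report.txt in the directory,
for f in *.txt
do
    ls -l $f      # unquoted: word-splits into two arguments, "my" and "report.txt"
    ls -l "$f"    # quoted: a single argument, "my report.txt"
done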
In the edit update, I've turned your CTL_PATH and CTL_FILE into variables. I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create the altered file is very shell-like.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up:
A. Are you sure the Oracle client package is installed on the machine that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working $PATH?
C. Try which sqlldr. If you don't get anything, either it's not installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr cmd. find / -name 'sqlldr*' will take a long time to run, but it will print out the path you want to use.
F. Take the "path" part of what is returned (like /opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and edit the script at the 2nd line with:
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work that understands your local computing environment that can help explain any missing or different steps.
IHTH
In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
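For reference, a minimal C sketch of the pattern I mean (error handling omitted; POSIX assumed):
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* create and open the file, then immediately drop its only name */
    int fd = open("/tmp/scratch", O_CREAT | O_EXCL | O_RDWR, 0600);
    unlink("/tmp/scratch");

    /* fd still works, but no path refers to the file any more */
    write(fd, "hello\n", 6);
    close(fd);   /* once the last descriptor is closed, the storage is freed */
    return 0;
}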
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions I expected to find a version of link() called flink() (compare with chmod()/fchmod()), but, at least on Linux, this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/<fd> with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
int fd;
char path[PATH_MAX];

fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);

/* File I/O on 'fd'... */

snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
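A minimal C sketch of that pattern (error handling omitted; the file names are just placeholders):
#include <stdio.h>

int main(void)
{
    /* write everything to a temporary name first */
    FILE *out = fopen("a.dat.part", "w");
    fputs("complete file contents\n", out);
    fclose(out);

    /* rename() is atomic within a filesystem: readers see either the
       old a.dat or the new one, never a half-written file */
    rename("a.dat.part", "a.dat");
    return 0;
}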
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to @mark4o posting about linkat(2), see his answer for details.
I wanted to give it a try to see what actually happened when trying to link an anonymous file back into the filesystem it is stored on (often /tmp, e.g. for video data that firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) do the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)
if ($#ARGV != 1) {
print "wrong number of args. Usage:\n";
print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
exit(1);
}
# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW); #nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;
# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100 /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW 0x400 /* Follow symbolic links. */
# #define AT_EMPTY_PATH 0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW ) { sub AT_SYMLINK_FOLLOW () { 0x0400 } }
unless (defined &AT_EMPTY_PATH ) { sub AT_EMPTY_PATH () { 0x1000 } }
sub my_linkat ($$$$$) {
# tmp copies: perl doesn't know that the string args won't be modified.
my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}
sub linkat_dotpaths ($$$) {
open(DOTFD, ".") or die "open . $!";
my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
close DOTFD;
return $ret;
}
sub link_stdin ($) {
my ($newp, ) = @_;
open(DOTFD, ".") or die "open . $!";
my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
close DOTFD;
return $ret;
}
sub linkat_follow_dotpaths ($$) {
return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}
## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];
# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";
if ($oldp eq '-') {
print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
$ret = link_stdin( $newp );
} else {
$ret = linkat_follow_dotpaths($oldp, $newp);
}
# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;
# if you want to see exactly what happened, run
# strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable as an unprivileged user. It's similar to a debugfs-based approach.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over current practice of temporary file renaming (rename, note, is guaranteed atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.
I'm using the excellent UNIX 'comm' command line utility in an application which I developed on a BSD platform (OSX). When I deployed to my Linux production server I found out that sadly, Ubuntu Linux's 'comm' utility does not take the -i flag to indicate that the lines should be compared case-insensitive. Apparently the POSIX standard does not require the -i option.
So... I'm in a bind. I really need the -i option that works so well on BSD. I've gone so far as to try to compile the BSD comm.c source code on the Linux box, but I got:
http://svn.freebsd.org/viewvc/base/user/luigi/ipfw3-head/usr.bin/comm/comm.c?view=markup&pathrev=200559
me#host:~$ gcc comm.c
comm.c: In function ‘getline’:
comm.c:195: warning: assignment makes pointer from integer without a cast
comm.c: In function ‘wcsicoll’:
comm.c:264: warning: assignment makes pointer from integer without a cast
comm.c:270: warning: assignment makes pointer from integer without a cast
/tmp/ccrvPbfz.o: In function `getline':
comm.c:(.text+0x421): undefined reference to `reallocf'
/tmp/ccrvPbfz.o: In function `wcsicoll':
comm.c:(.text+0x691): undefined reference to `reallocf'
comm.c:(.text+0x6ef): undefined reference to `reallocf'
collect2: ld returned 1 exit status
Does anyone have any suggestions as to how to get a version of comm on Linux that supports 'comm -i'?
Thanks!
You can add the following in comm.c:
/* BSD-style reallocf(): like realloc(), but frees the original
   block when the allocation fails, so the caller never leaks it. */
void *reallocf(void *ptr, size_t size)
{
    void *ret = realloc(ptr, size);
    if (ret == NULL) {
        free(ptr);
    }
    return ret;
}
You should be able to compile it then. Make sure comm.c has #include <stdlib.h> in it (it probably does that already).
The reason your compilation fails is because BSD comm.c uses reallocf() which is not a standard C function. But it is easy to write.
@OP, there's no need to go to such lengths as to compile the source code yourself. Here's an alternative suggestion: since you want case-insensitive comparison, you can just convert the cases in both files to lower (or upper) case using another tool such as tr before you pass the files to comm.
tr '[A-Z]' '[a-z]' <file1 > temp1
tr '[A-Z]' '[a-z]' <file2 > temp2
comm temp1 temp2
You could try to cat both files, sort the result, and pipe it to uniq -c -i (uniq only collapses adjacent lines, so the input must be sorted). It'll show all lines in both files, with the number of appearances in the first column. As long as the original files don't have repeated lines, all lines with the first column >1 are lines common to both files.
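A sketch of that approach (file names are illustrative; sort -f folds case so the case-insensitive uniq sees matching lines as adjacent):
cat file1 file2 | sort -f | uniq -c -i | awk '$1 > 1'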
Hope it helps!
Does anyone have any suggestions as to how to get a version of comm on Linux that supports 'comm -i'?
Not quite that; but have you checked whether your requirements could be satisfied by the join utility? That one does have the -i option on Linux...
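For instance, with both inputs sorted case-insensitively (sort -f) and one field per line, something like:
join -i file1.sorted file2.sorted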
I have created a simple test application with the following code:
var i : int;
for (i=0; i<3000000; i++){
trace(i);
}
When I run the application, it's very slow to load, which means the trace calls are running.
I checked the Flash player by right-clicking; the debugger option is not enabled.
So I wonder if there is a compiler option to exclude the traces.
Otherwise, I have to remove all the traces from the program manually.
Are there any other compiler options to optimize a Flex application as much as possible?
There is a really sweet feature built into Flex called the logging API (you can read more about it here http://livedocs.adobe.com/flex/3/html/logging_09.html).
Basically, you log (trace) things in a different way, admittedly with slightly more code than a standard trace, but it allows you much greater flexibility. This is an example:
import mx.logging.Log;
Log.getLogger("com.edibleCode.logDemo").info("This is some info");
Log.getLogger("com.edibleCode.logDemo").error("This is an error");
Then all you need to do is create a trace target in your main application file, something like:
<mx:TraceTarget id="logTarget" fieldSeparator=" - " includeCategory="true" includeLevel="true" includeTime="true">
<mx:filters>
<mx:Array>
<mx:String>*</mx:String>
</mx:Array>
</mx:filters>
<!--
0 = ALL, 2 = DEBUG, 4 = INFO, 6 = WARN, 8 = ERROR, 1000 = FATAL
-->
<mx:level>0</mx:level>
</mx:TraceTarget>
And register the trace with:
Log.addTarget(logTarget);
This provides several benefits over the normal trace:
You can filter (turn off) traces to only see what you want:
Either by modifying the filters array
Or the level to show only error or fatal messages
You can replace the trace target with any other type of logging interface, e.g.
A TextField
A text file
Use conditional compilation (see Adobe's compiler documentation for details).
In your code set:
CONFIG::debugging {
trace(i);
}
Then go to Project->Properties->Flex Compiler and add
-define=CONFIG::debugging,false
or
-define=CONFIG::debugging,true
You could do a find/replace on the entire project. search for 'trace(' and replace with '//trace('. That would be quick enough and easily undone.
The mxmlc argument debug allows you to add or remove debug features from SWF files. The value of the debug argument is false by default for the command line compiler, but in Flex Builder, you have to manually create a non-debug SWF. According to the documentation on compiler arguments, debug information added to the SWF includes "line numbers and filenames of all the source files". There is no mention of trace() function calls, and I don't think there's a way to remove them through a compiler argument, but you're welcome to check the linked document for the entire list of available arguments.
There are two compiler options that you should set: -debug=false -optimize=true. In Flex Builder or Eclipse, look under Project->Properties->Flex Compiler and fill in the box labeled "Additional compiler arguments."
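For a command-line build, that corresponds to something like this (the application file name is illustrative):
mxmlc -debug=false -optimize=true MyApplication.mxml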
Go to your flex code base directory (and shut down Flex Builder if it's running - it gets uppity if you change things while it's running). Run this to change all your trace statements. I recommend checking the tree into git or something first and then running a diff afterwards (or cp -r the tree to do a diff -r or something). The only major case this will mess up is if you have semicolons inside trace strings:
find . -name '*.as' -exec perl -pe 'BEGIN{ undef $/; }s/trace([^;]*);/CONFIG::debugging { trace $1 ; };/smg;' -i {} \;
find . -name '*.mxml' -exec perl -pe 'BEGIN{ undef $/; }s/trace([^;]*);/CONFIG::debugging { trace $1 ; };/smg;' -i {} \;
Then set up the following in your Project->Properties->Flex Compiler->Additional compiler arguments:
-define=CONFIG::debugging,true -define=CONFIG::release,false
And use:
CONFIG::release { /* code */ }
for the "#else" clause. This was the solution I picked after reading this question and answer set.
Also beware this:
if( foo )
{
/*code*/
}
else
CONFIG::debugging { trace("whoops no braces around else-clause"); };
I.e. if you have ONLY one of these in an if or else or whatever block, and it's a naked block with no braces, then the compiler will complain regardless of whether the block is compiled out.
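The fix is simply to give the else-clause braces of its own:
else
{
    CONFIG::debugging { trace("now the else-clause has its own braces"); }
}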
Something else you could do is define a boolean named debugMode or something in an external constants .as file somewhere and include this file in any project you use. Then, before any trace statement, you could check the status of this boolean first, as sketched below. This is similar to zdmytriv's answer.
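A minimal sketch of that idea (the file and variable names are illustrative):
// DebugConstants.as - shared constants file, included where needed
public const debugMode:Boolean = false;

// elsewhere, guard each trace:
if (debugMode)
{
    trace(i);
}
Unlike conditional compilation, the trace calls still end up in the release SWF; they just never execute.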
Have to say, I like edibleCode's answer and look forward to trying it some time.