Getting Started with SyntaxNet (start parsing text right away with Parsey McParseface) - syntaxnet

I am new to SyntaxNet and I recently tried to install it step by step from https://github.com/tensorflow/models/blob/master/syntaxnet/README.md#instalation.
After running bazel test it reported "Executed 12 out of 12 tests: 12 tests pass",
but when I ran this command:
ubuntu@ubuntu-VirtualBox:~/Downloads/git-2.7.4/models/syntaxnet$
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh
it gave me these errors:
syntaxnet/demo.sh: line 31: bazel-bin/syntaxnet/parser_eval:
No such file or directory
syntaxnet/demo.sh: line 43: bazel-bin/syntaxnet/parser_eval:
No such file or directory
syntaxnet/demo.sh: line 55: bazel-bin/syntaxnet/conll2tree:
No such file or directory
I would really appreciate it if anyone could help me.
Thank you so much.

I had the same issue.
To fix it, modify lines 31 and 55 of the demo.sh file.
The locations it points to for parser_eval and conll2tree are wrong, at least they were on my system.
Do a search with "sudo find / -iname 'parser_eval'".
For me the location of this file was "/home/jesus/.cache/bazel/_bazel_jesus/afbbfe6033ddfb6168467a72894e5682/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_eval".
I then pointed line 31 to this location instead of "bazel-bin/syntaxnet/parser_eval",
did the same for line 55 and conll2tree,
saved the file, and got it running.
Hope it helps
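The manual edit can also be scripted. Here is a rough sketch; the demo.sh stand-in and the BIN path below are illustrative only (BIN is an example of what `sudo find / -iname 'parser_eval'` might print; substitute the path find reports on your machine):

```shell
#!/bin/sh
# Illustrative only: create a stand-in for the demo.sh line, then patch it
# the way you would patch the real syntaxnet/demo.sh. BIN is a made-up
# example of the bazel cache location that `find` might print.
cd "$(mktemp -d)" || exit 1
printf '%s\n' 'bazel-bin/syntaxnet/parser_eval "$@"' > demo.sh
BIN="/home/jesus/.cache/bazel/_bazel_jesus/afbbfe6033ddfb6168467a72894e5682/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_eval"
# Use | as the sed delimiter so the slashes in the path need no escaping.
sed -i "s|bazel-bin/syntaxnet/parser_eval|$BIN|g" demo.sh
cat demo.sh   # the line now points at the discovered binary location
```

The same sed call, with conll2tree in place of parser_eval, handles line 55.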

I had a similar problem; posting in case it might be useful to anyone. If you rename or move the path where SyntaxNet is installed, you'll break the half dozen symbolic links it creates during installation (they use absolute paths). In that case, you have to recreate the links with the new path.

Related

Colab not recognising an existing directory

I have been trying to run an openpose model on Colab but haven't been able to do so, because Colab doesn't recognise the directory. [Screenshot of code]
I have provided the code screenshot in this message; any help or direction will be highly appreciated!
Edit 1: A modification from the first answer
code:
!cd openpose && ./build/examples/openpose/openpose.bin -image_dir /drive/My\ Drive/research_project/Fall\ Detection/$category/testdata/video$video --render_pose 0 --disable_blending -keypoint_scale 3 --display 0 -write_json /drive/My\ Drive/research_project/Fall\ Detection/$category/jsondata/video$video
output:
Error:
Folder /drive/My Drive/research_project/Fall Detection/Coffee_room/testdata/video0/ does not exist.
I believe you need to remove the '..', since you are already in the '/content' folder after the os.chdir('/content') command.
If that's not it, you are also missing '/research_project' after '/My Drive' in the line before the last.
With the %cd operation you already moved yourself to [...]/Coffee_room/testdata, so when you then try the os.chdir command, it throws an error. At least I think so; the screenshot doesn't let me copy the code to try and recreate the same situation, so it's a bit hard to be sure.
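Independent of the openpose specifics, it helps to walk up the suspect path and see which component is the first one that doesn't exist. A small sketch (the path is the one from the question; note that in Colab, Google Drive is normally mounted under /content/drive, so a leading /drive/... is unlikely to exist):

```shell
#!/bin/sh
# Walk up a suspect path and report every component that doesn't exist.
p="/drive/My Drive/research_project/Fall Detection"
while [ -n "$p" ] && [ "$p" != "/" ]; do
  [ -d "$p" ] || echo "missing: $p"
  p=$(dirname "$p")
done
```

The first "missing" line printed from the bottom up tells you exactly where the path goes wrong.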
Try to put your code in the right format inside the question, like this:
print('Hello, this is my code')

Unix remove old files based on date from file name

I have filenames in a directory like:
ACCT_GA12345_2015-01-10.xml
ACCT_GA12345_2015-01-09.xml
ACCT_GDC789g_2015-01-09.xml
ACCT_GDC567g_2015-01-09.xml
ACCT_GDC567g_2015-01-08.xml
ACCT_GCC7894_2015-01-01.xml
ACCT_GCC7894_2015-01-02.xml
ACCT_GAC7884_2015-02-01.xml
ACCT_GAC7884_2015-01-01.xml
I want to keep only the latest file for each account in the folder. The latest file must be determined from the file name alone (NOT the file's timestamp). For example, ACCT GA12345 has files from 01-10 and 01-09, so I need to delete the 01-09 file and keep only 01-10; ACCT 789g has only one file, so I keep it; and for ACCT 567g the latest file is 01-09, so I remove 01-08 and keep 01-09. In short, the file to keep is identified by the ACCT plus the maximum date for that ACCT.
I would need the final list of files as:
ACCT_GA12345_2015-01-10.xml
ACCT_GDC789g_2015-01-09.xml
ACCT_GDC567g_2015-01-09.xml
ACCT_GCC7894_2015-01-02.xml
ACCT_GAC7884_2015-02-01.xml
Can someone help me with this command in Unix? Any help is appreciated.
I'd do something like this. To test, start with the ls command; once it lists exactly what you want to delete, switch to rm.
ls ACCT_{GDC,GA1}*-{09,10}.xml
This lists any GDC or GA1 files whose names end in 09 or 10. You can play with combinations and different values until the listing shows exactly the set of files you want deleted. Once it does, just change ls to rm and you should be golden.
With some more info I could help you out further. To test this out I did:
touch ACCT_{GDC,GA1}_{01..10}_{05..10}.xml
which makes 120 dummy files with different combinations. Make a directory, run this command, and get your hands dirty. That is the best way to learn the Linux CLI. Also, 65% of the commands you need you will learn, understand, use, and then never use again... so learn how to teach yourself with man pages, and set up a spot to play around in.
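For the concrete "keep the newest file per account" task, here is a one-liner sketch. It assumes the names always follow ACCT_&lt;account&gt;_&lt;YYYY-MM-DD&gt;.xml, so reverse lexical order puts the newest date first within each account (and assumes no spaces in the names, since it pipes through xargs):

```shell
#!/bin/sh
# Demo in a scratch directory: create a few of the sample files, then
# delete everything except the newest file seen for each account.
cd "$(mktemp -d)" || exit 1
touch ACCT_GA12345_2015-01-10.xml ACCT_GA12345_2015-01-09.xml \
      ACCT_GDC567g_2015-01-09.xml ACCT_GDC567g_2015-01-08.xml \
      ACCT_GDC789g_2015-01-09.xml
# sort -r: newest date first within each account.
# awk -F_ 'seen[$2]++': print only the 2nd-and-later (i.e. older) files
# for each account field. xargs rm: delete those older files.
ls ACCT_*.xml | sort -r | awk -F_ 'seen[$2]++' | xargs -r rm --
ls ACCT_*.xml   # only the newest file per account remains
```

As above, replace the final `rm --` with `echo` first to dry-run it against your real directory.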

Unix SQLLDR script gives 'Unexpected End of File' error

All, I am running the following script to load data onto an Oracle server using a Unix box and sqlldr. Earlier it gave me an error saying sqlldr: command not found, so I added "SQLPLUS << EOF". Now it gives me an unexpected-end-of-file syntax error on line 12, but the script is only 11 lines long. What do you think the problem is?
#!/bin/bash
FILES='ls *.txt'
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'#SERVERNAME << EOF sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f EOF
done
sqlplus will never know what to do with the command sqlldr; they are two complementary command-line utilities for interfacing with Oracle DB.
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
#you dont want this FILES='ls *.txt'
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_USER=SCHEMA_NAME
SCHEMA_PASSWORD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME
for f in *.txt
do
# don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
#myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
sqlldr "$SCHEMA_USER/$SCHEMA_PASSWORD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files that have odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is to quote all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
In the edit update, I've parameterized your CTL_PATH and CTL_FILE. I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create an altered file is very shell-like too.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
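The difference is easy to see with a file whose name contains a space. A small demo in a scratch directory:

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
touch "a file.txt" plain.txt
# Capturing `ls` and word-splitting it breaks "a file.txt" into two words:
for f in $(ls *.txt); do echo "WORD: $f"; done
# The glob hands each real filename to the loop intact:
for f in *.txt; do echo "FILE: $f"; done
```

The first loop prints three bogus "words" for two files; the glob version prints exactly two filenames.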
To finish up:
A. Are you sure the Oracle Client package is installed on the machine that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working $PATH?
C. Try which sqlldr. If you don't get anything, either it's not installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr command. find / -name 'sqlldr*' will take a long time to run, but it will print out the path you want to use.
F. Take the "path" part of what is returned (like /opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and edit the script near the 2nd line with:
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If they don't, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH

UNIX - Finding all empty files in source directory & finding files edited X days ago

The following is from a school assignment.
The question asks
"What command enables you to find all empty files in your source
directory."
My answer: find -size 0. However, my instructor says that my answer is incorrect. The only hint he gives (regarding the entirety of the assignment) is "...minor errors such as missing a file name or outputting too much information". I was thinking perhaps I should include the source directory within my find command.
I've been trying to figure this out for the past few hours. I've referenced my textbook, and according to that I should be correct.
I'm having similar issues with some of the other questions. I've racked my brain over this for hours; I just don't understand what I'm doing wrong.
Since your assignment was to find all empty files in your source directory, the following command will do exactly what you want:
find . -size 0
Notice the dot (.) to tell the command to search in the current folder.
For other folders, you replace the "dot" with the folder you want.
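If the instructor's complaint is "outputting too much information," one likely refinement (a guess on my part, not something the assignment states) is restricting the match to regular files, since find's -empty test also matches empty directories:

```shell
#!/bin/sh
# Demo: an empty file, a non-empty file, and an empty directory.
cd "$(mktemp -d)" || exit 1
touch empty.txt
echo data > full.txt
mkdir emptydir
# -type f keeps directories out of the output; -empty matches size-0 files.
find . -type f -empty   # prints ./empty.txt only
```

Replace the dot with your actual source directory, as noted above.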

Cannot compile. No space left on device error - Unix

I'm coding for a class, and when I try to compile I now get this error. It worked fine yesterday. IT does not work weekends, so I'm out of luck until Monday unless someone can help. I'm fairly new to Unix, as I only really use it when coding.
cc scheduler.c
Close failure on scheduler.o : No space left on device
cc: acomp failed for scheduler.c
Delete some things on your disk. Use df (type 'man df' for usage) to see where the mount point is that you're compiling onto.
Check out the quota command. It'll show you how much space you get. You've probably just used up all the disk space allotted for your account. Go to your home directory:
> cd ~
and run:
> du -sh *
It will show how much space each of your directories takes up. Just remove some unused files.
If one directory takes up most of the space, you can cd into it and run du -sh * there too, to see the disk usage of its subdirectories. It's basically just a useful command for finding large files that you might not need anymore. For example, if you downloaded a really big program for a class project last year, but no longer need it, just rm it.
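The commands above fit together into a short triage sequence (a sketch; `sort -h` and human-readable `du`/`df` flags assume GNU coreutils):

```shell
#!/bin/sh
# 1. How full is the filesystem the current directory lives on?
df -h .
# 2. Which entries under $HOME are biggest? (largest printed last)
du -sh "$HOME"/* 2>/dev/null | sort -h | tail -n 5
```

Whatever shows up at the bottom of that list is the first candidate for cleanup with rm.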
