Using the Praat executable, I can write a TextGrid interval to a text file by clicking the To TextGrid (vuv) button in the right-hand panel of the Praat objects window. I'm using To TextGrid (vuv)... 0.02 0.01 in my Praat script, but I get the error: Command "To TextGrid (vuv)..." not available for current selection.
Am I missing something?
Is it possible to do this with a Praat script at all?
This may help. To TextGrid (vuv)... is a command on PointProcess objects rather than Sounds, which is why the script below first creates a PointProcess from each sound:
directory$ = "./"
list = Create Strings as file list... list 'directory$'*.wav
numberOfFiles = Get number of strings
if !numberOfFiles
exit There are no sound files in the folder!
endif
for current_file from 1 to numberOfFiles
select list
fileName$ = Get string... current_file
name$ = fileName$ - ".wav" - ".wav"
sound = Read from file... 'directory$''fileName$'
# min and max pitch
pulses = To PointProcess (periodic, cc)... 30 400
vuv = To TextGrid (vuv)... 0.02 0.01
Save as text file... 'directory$''name$'.TextGrid
plus pulses
plus sound
Remove
endfor
In addition, you can join the praat-users group on Yahoo, where Paul Boersma, one of the authors of Praat, is very active in answering questions. You can learn more about Praat scripting in the Praat manual here:
http://www.fon.hum.uva.nl/praat/manual/Scripting.html
I have been obtaining .zip archives of genome annotation from NCBI (mainly gff files). In order to save disk space I prefer not to unzip the archive, but to read these files directly into R using unz(). However, it seems that unz() is unable to extract files from the end of 'large' zip files:
ncbi.zip <- "file_location/name.zip"
files <- unzip(ncbi.zip, list=TRUE)
gff.files <- files$Name[ grep("gff$", files$Name) ]
## this works
gff.128 <- readLines( unz(ncbi.zip, gff.files[128]) )
## this gives an empty data structure (read.table() stops
## with an error saying no lines or similar)
gff.129 <- readLines( unz(ncbi.zip, gff.files[129]) )
## there are 31 more gff files after the 129th one.
## no lines are read from any of these.
The zip file itself seems to be fine; I can unzip the specific files using unzip on the command line and unzip -t does not report any errors.
I've tried this with R versions 3.5 (openSUSE Leap 15.1), 3.6, and 4.2 (CentOS 7), and with more than one zip file, and I get exactly the same result.
I attached strace to R whilst reading the 128th and 129th files. In both cases I get a lot of lseek calls towards the end of the file (offset 2845892608, larger than 2^31) to start with. This is where I assume the zip central directory can be found. For the 128th file (the one that can be read), I eventually get an lseek to an offset slightly below 2^31, followed by a set of lseeks and reads (that extend beyond 2^31).
For the 129th file, I get the same reads towards the end of the file, but then rather than finding a position within the file I get:
lseek(3, 2845933568, SEEK_SET) = 2845933568
lseek(3, 4294963200, SEEK_SET) = 4294963200
read(3, "", 4096) = 0
lseek(3, 4095, SEEK_CUR) = 4294967295
read(3, "", 4096) = 0
This is a bit weird since the file itself is only about 2.8 GB. 4294967295 is, of course, 2^32 - 1.
To me this feels like an integer overflow bug, and I am considering filing a bug report. But I am wondering if anyone has seen something similar before, or if I am doing something stupid.
Having done what I should have started with (reading the zip64 format specification), it's actually clear that this is not an integer overflow error.
Zip files contain a central directory at the end of the archive; this contains, amongst other things, the names of the compressed files and the offset of the compressed data in the zip archive. The offset (and the file size fields) are only given 4 bytes each in the standard directory entry; when the offset is too large to fit, it should instead be given in the extra field section and the value in the standard field should be set to 0xFFFFFFFF. Since this is the offset that gets used when reading the file, it seems clear that the problem lies in the parsing of the extra field.
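For anyone wanting to double-check an archive like this outside R: Python's zipfile module does parse the zip64 extra field, so a sketch along these lines (path and index taken from the example above) should read the 129th gff file without trouble if the archive itself is fine.
import zipfile

# zipfile resolves zip64 extra fields, so an entry whose 4-byte offset field
# holds 0xFFFFFFFF is still located and read correctly.
with zipfile.ZipFile("file_location/name.zip") as zf:
    gff_names = [n for n in zf.namelist() if n.endswith("gff")]
    with zf.open(gff_names[128]) as fh:   # the 129th gff file (0-based index)
        for _ in range(5):
            print(fh.readline().decode("utf-8", "replace").rstrip())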
I had a look at the source code for R 4.2.1 and it seems that the problem is due to the way the offset specified in the standard offset field is tested:
if(file_info.uncompressed_size == (ZPOS64_T)(unsigned long)-1)
Changing this to == 0xFFFFFFFF seems to fix the problem.
I've submitted a bug report to R. Hopefully changing the check will not have any unintended consequences and the issue will be fixed.
Still, I'm curious as to whether anyone else has come across the same issue. Seems a bit unlikely that my experience is unique.
I'm working on an MTG auto-sorter, and some of the cards have interesting names that Python seems unable to find. I am looking for a file (that I know I have in the right spot) called 8_JÃtun_Grunt.png, using this...
for card_name in card_names:
    # Fetch the image - name can be found based on the card's information
    card_info['name'] = card_name
    img_name = '%s/card_img/png/%s/%s_%s.png' % (Config.data_dir, card_info['set'],
                                                 card_info['collector_number'],
                                                 fetch_data.get_valid_filename(card_info['name']))
    card_img = cv2.imread(img_name)
    # If the image doesn't exist, download it from the URL
    if card_img is None:
        fetch_data.fetch_card_image(card_info,
                                    out_dir='%s/card_img/png/%s' % (Config.data_dir, card_info['set']))
        card_img = cv2.imread(img_name)
    if card_img is None:
        print('WARNING: card %s is not found!' % img_name)
The error I get is shown in this screenshot:
(screenshot: error from cmd)
This leads me to think that it can't recognize the file name, but I'm reading it from a database that I can't change. Any ideas?
I wouldn't be surprised if OpenCV couldn't handle file paths with unicode characters.
You could try adding the code from the answer to this SO question.
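In case that link goes stale: the usual workaround (a sketch, not tested against your setup; imread_unicode is just an illustrative name) is to read the raw bytes with NumPy, which copes with non-ASCII paths, and decode them with cv2.imdecode instead of calling cv2.imread directly:
import numpy as np
import cv2

def imread_unicode(path):
    # np.fromfile accepts non-ASCII paths where cv2.imread may fail;
    # return None (like cv2.imread does) if the file is missing or empty.
    try:
        data = np.fromfile(path, dtype=np.uint8)
    except OSError:
        return None
    if data.size == 0:
        return None
    return cv2.imdecode(data, cv2.IMREAD_COLOR)
You could then replace both cv2.imread(img_name) calls in your loop with imread_unicode(img_name).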
Problem
I have 100+ *.docx files created with Microsoft Word on a Windows machine that I would like to use with LibreOffice Writer.
Unfortunately, somehow all the tables have been squished in Writer as shown:
I've tried to fix this by:
Selecting the entire table (Ctrl-A, Ctrl-A)
Right-clicking the table
Going to Size
Clicking Optimal Column Width
This indeed gives me the desired result:
What I've tried so far
Failed approach 1
I've created the following macro that does the formatting for a single table:
REM ***** BASIC *****
sub setTableColumns
    rem ----------------------------------------------------------------------
    rem define variables
    dim document as object
    dim dispatcher as object
    rem ----------------------------------------------------------------------
    rem get access to the document
    document = ThisComponent.CurrentController.Frame
    dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
    dispatcher.executeDispatch(document, ".uno:SelectAll", "", 0, Array())
    dispatcher.executeDispatch(document, ".uno:SelectAll", "", 0, Array())
    dispatcher.executeDispatch(document, ".uno:SetOptimalColumnWidth", "", 0, Array())
end sub
I've assigned a keyboard shortcut to this macro such that I can reformat a single table by:
Clicking on the table of interest.
Running the shortcut.
This is an improvement, but it still requires a lot of manual work.
Failed approach 2
I've also tinkered with the examples given in this different SO question.
I've managed to set the relative width of all tables to 100% by changing this property of all my tables:
https://www.openoffice.org/api/docs/common/ref/com/sun/star/text/TextTable.html#RelativeWidth
I did this by using the following macro (snippet):
tables = ThisComponent.TextTables
for tid = 0 to tables.count - 1
    table = tables(tid)
    table.RelativeWidth = 100
next
This does widen all the tables; however, the format is not desirable.
Question
Is there a way to apply the optimal column width setting to all tables in a file?
It would be awesome if I could apply it to all tables in multiple docx files at once.
However, it would already make me very happy if I could automatically format all tables in a single docx file.
The code needs to loop through all tables in the document, select each one and then optimize the selected table.
The following Basic code from Andrew Pitonyak's macro document section 7.2.1 shows how to select a table.
ThisComponent.getCurrentController().select(oTable)
Here is some Python code from a project of mine (table, endCol, numRows, controller, dispatcher and frame are defined elsewhere in that project):
cellsRange = table.getCellRangeByPosition(
    0, 0, endCol, numRows - 1)
controller.select(cellsRange)
dispatcher.executeDispatch(
    frame, ".uno:SetOptimalColumnWidth", "", 0, ())
There are also ways to loop through a set of documents, so the code can open, optimize, save and close each document. The most difficult part is closing properly without a crash, as closing events can sometimes interrupt each other.
If this is a problem, you may find it easier to open, save and close each by hand, running the macro to optimize all tables in the currently open document. It's also not too hard to loop through all open documents, if you find it more convenient to open multiple files at the same time.
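For the "many docx files at once" part of the question, a batch variant could reuse optimize_tables() from the sketch above, roughly like this (again only a sketch; the folder path is a placeholder, and the Hidden flag keeps document windows from opening):
import os
import uno
from com.sun.star.beans import PropertyValue

def _prop(name, value):
    p = PropertyValue()
    p.Name = name
    p.Value = value
    return p

def optimize_folder(*args):
    ctx = XSCRIPTCONTEXT.getComponentContext()
    desktop = ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.frame.Desktop", ctx)
    folder = "/path/to/docx/files"   # placeholder
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".docx"):
            continue
        url = uno.systemPathToFileUrl(os.path.join(folder, name))
        doc = desktop.loadComponentFromURL(
            url, "_blank", 0, (_prop("Hidden", True),))
        optimize_tables(doc, ctx)
        doc.store()      # saves back with the filter it was loaded with
        doc.close(False)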
I'm having issues trying to read, in another program, a binary file I've previously written. I have been able to open it and read it into an array without compilation errors; however, the array is not populated (all 0's). Any suggestions or thoughts would be great. Here is the open/read statement I'm using:
allocate(dummy(imax,jmax))
open(unit=io, file=trim(input), form='binary', access='stream', &
     iostat=ioer, status='old', action='READWRITE')
if(ioer/=0) then
   print*, 'Cannot open file'
else
   print*,'success opening file'
end if
read(unit=io, fmt=*, iostat=ioer) dummy
j=0
k=0
size: do j=1, imax
   do k=1, jmax
      if(dummy(j,k) > 0.) print*,dummy(j,k)
   end do
end do size
Please let me know if you need more info.
Here is how the file is originally written:
out_file = trim(output_dir)//'SEVIRI_FRP_.08deg_'//trim(season)//'.bin'
print*, out_file
print*, i_max,' i_max,',j_max,' j_max'
open (io, file = out_file, access = 'direct', status = 'replace', recl = i_max*j_max*4)
write(io, rec = 1) sev_frp
write(io, rec = 2) count_sev_frp
write(io, rec = 3) sum_sev_frp
check: do n=1, i_max
   inna: do m=1, j_max
      !if (sev_frp(n,m) > 0) print*, count_sev_frp(n,m)
   end do inna
end do check
print*,'n-',n,'m-',m
close(io)
First of all, the form= specifier takes two possible values as far as I know: "FORMATTED" or "UNFORMATTED".
Second, to read, you should use an open statement that is symmetric to the one you used to write the file, unless you know exactly what you are doing. I suggest that for reading you open with:
open(unit=io, file=trim(input), access='direct', &
iostat=ioer, status='old', action='READ', recl = i_max*j_max*4)
That corresponds to the open statement that you used to save the file.
As innoSPG says, you have a mismatch in the way the file is written and how it is read.
An external file may be connected with one of three access methods: sequential; direct; stream. Further, a connection may be formatted or unformatted.
When the file is opened for writing, it uses the direct access method with unformatted records. The records are unformatted because this is the default (in the absence of the form= specifier).
When you open the file for reading you use the non-standard extension of form="binary" and stream access. There is possibly nothing wrong with this, but it does require care.
However, with the read statement you are using formatted (list-directed) input. This is not allowed.
The way suggested in the previous answer, of using a similar access method and record length, will require a further change to the code. [You'll also need to set the value of the record length somehow.]
Not only will you need to remove the format, to match the unformatted records written, but you'll want to use the rec= specifier to access the records of the file.
Finally, if you are using the iostat= specifier you really should check the resulting value.
I'm writing a program that performs several tests on a hardware unit, and logs both the results of each test and the steps taken to perform the test. The trick is that I want the program to log these results to a text file as they become available, so that if the program crashes the results that had been obtained are not lost, and the log can help debug the crash.
For example, assume a program consisting of two tests. If the program has finished the first test and is working on the second, the log file would look like:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[whatever test 2 steps have been completed]
Once the second test has finished, the log file would look like this:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Test 2 Result A: Passed
Test 2 Result B: 2.0 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[Steps in test 2]
Finished test 2.
All tests complete.
How would I go about doing this? I've been looking at the help files for QFile and QTextStream, but I'm not seeing a way to insert text in the middle of existing text. I don't want to create separate files and merge them at the end because I'd end up with separate files in the event of a crash. I also don't want to write the file from scratch every time a change is made because it seems like there should be a faster, more elegant way of doing this.
QFile::readAll() will read the entire file into a QByteArray.
On the QByteArray you can then use insert() to add text in the middle,
and then write it back to the file again.
Or you could use the classic C approach, which can modify files in the middle with the help of file pointers.
As #Roku pointed out, there is no built-in way to insert data into a file without rewriting it. However, if you know the size of the region, i.e., if the text you want to write has a fixed length, then you can write empty space in the file and replace it later. Check
this discussion on overwriting part of a file.
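In case that link goes stale, the idea is simply to reserve a fixed-size block for the results at the top of the file and overwrite it in place once the values are known, while log lines are appended as usual. A minimal sketch of the pattern (shown in Python for brevity; QFile::seek() and QFile::write() give you the same primitives in Qt):
RESULTS_BLOCK = 512   # bytes reserved up front for the results section

def create_log(path):
    # Write a space-padded placeholder for the results, then start the log.
    with open(path, "wb") as f:
        f.write(b"Results:\n".ljust(RESULTS_BLOCK))
        f.write(b"Log:\n")

def append_log(path, line):
    with open(path, "ab") as f:
        f.write(line.encode() + b"\n")

def write_results(path, text):
    # Overwrite the reserved block in place; it must not grow past the block.
    data = ("Results:\n" + text).encode()
    if len(data) > RESULTS_BLOCK:
        raise ValueError("results exceed the reserved block")
    with open(path, "r+b") as f:
        f.seek(0)
        f.write(data.ljust(RESULTS_BLOCK))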
I ended up going with the "write the file from scratch" method that I mentioned being hesitant about in my question. The benefit of this technique is that it results in a single file, even in the event of a crash since the log and the results are never placed in different files to begin with. Additionally, rewriting the file only happens when adding new results (an infrequent occurrence), whereas updating the log means simply appending text to the file as usual. I'm still a bit surprised that there isn't a way to have the OS insert text into a file for you.
Oh, and for those of you who absolutely must have this functionality as efficiently as possible, the following might be of use:
http://www.codeproject.com/Articles/17716/Insert-Text-into-Existing-Files-in-C-Without-Temp
You just cannot add more stuff in the middle of a file. I would go with two separate files, one for the results and another for the logs.