Are there restricted characters in ADO varchars? - asp-classic

We have a simple file browser on our intranet, built using ASP/vbscript. The files are read by the script and added to an ADO Recordset (not connected to a database), so we can sort the contents easily:
Set oFolderContents = oFolder.Files
Set rsf = Server.CreateObject("ADODB.Recordset")
rsf.Fields.Append "name", adVarChar, 255
rsf.Fields.Append "size", adInteger
rsf.Fields.Append "date", adDate
rsf.Fields.Append "type", adVarChar, 255
rsf.Open
For Each oFile In oFolderContents
    If Not Left(oFile.Name, 3) = "Dfs" Then 'Filter DFS folders
        rsf.AddNew
        rsf.Fields("name").Value = oFile.Name
        rsf.Fields("size").Value = oFile.Size
        rsf.Fields("date").Value = oFile.DateCreated
        rsf.Fields("type").Value = oFile.Type
    End If
Next
In one particular folder we are getting an error:
Microsoft Cursor Engine error '80040e21'
Multiple-step operation generated errors. Check each status value.
This points to the line
rsf.Fields("name").Value = oFile.Name
in the code above.
My initial thought was that this was caused by a long file name, but I checked the length of all the files in the directory - although some are quite long, all are under the 255-character limit set above (the longest is 198 characters).
The folder in question has nearly 2000 PDFs in it, and I only have read permission - I can't alter the contents (it's a technical library). The files follow a naming convention of "ID# - Paper Title". Some have special characters such as ', &, and ( or ) - could some of these be causing the issue? I don't recall having such a problem before. I tried searching Google for special characters in ADO, but couldn't find anything that seemed relevant.
Thanks :-)

Have you tried using adVarWChar for the name column?
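Unicode characters outside the server's ANSI code page are a likely culprit here: adVarChar holds narrow (ANSI) strings, so a file name containing, say, a curly quote or an accented character can fail the assignment, whereas adVarWChar holds wide (Unicode) strings. A minimal sketch of the change (the constant is the standard ADO DataTypeEnum value, in case you don't include adovbs.inc):
Const adVarWChar = 202 'wide (Unicode) variable-length string
rsf.Fields.Append "name", adVarWChar, 255
rsf.Fields.Append "type", adVarWChar, 255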

Related

Failure of unz() to unzip from a zip file offset of more than 2^31 bytes

I have been obtaining .zip archives of genome annotation from NCBI (mainly gff files). In order to save disk space I prefer not to unzip the archive, but to read these files directly into R using unz(). However, it seems that unz() is unable to extract files from the end of 'large' zip files:
ncbi.zip <- "file_location/name.zip"
files <- unzip(ncbi.zip, list=TRUE)
gff.files <- files$Name[ grep("gff$", files$Name) ]
## this works
gff.128 <- readLines( unz(ncbi.zip, gff.files[128]) )
## this gives an empty data structure (read.table() stops
## with an error saying no lines, or similar)
gff.129 <- readLines( unz(ncbi.zip, gff.files[129]) )
## there are 31 more gff files after the 129th one.
## no lines are read from any of these.
The zip file itself seems to be fine; I can unzip the specific files using unzip on the command line and unzip -t does not report any errors.
I've tried this with R versions 3.5 (openSUSE Leap 15.1), 3.6, and 4.2 (CentOS 7), and with more than one zip file, and get exactly the same result.
I attached strace to R whilst reading in the 128th and 129th files. In both cases I start with a lot of lseek calls towards the end of the file (offset 2845892608, larger than 2^31); this is where I assume the zip central directory can be found. For the 128th file (the one that can be read), I eventually get an lseek to an offset slightly below 2^31, followed by a set of lseeks and reads (that extend beyond 2^31).
For the 129th file, I get the same reads towards the end of the file, but then rather than finding a position within the file I get:
lseek(3, 2845933568, SEEK_SET) = 2845933568
lseek(3, 4294963200, SEEK_SET) = 4294963200
read(3, "", 4096) = 0
lseek(3, 4095, SEEK_CUR) = 4294967295
read(3, "", 4096) = 0
This is a bit weird since the file itself is only about 2.8 GB; 4294967295 is, of course, 2^32 - 1.
To me this feels like an integer overflow bug, and I am considering posting a bug report. But I am wondering if anyone has seen something similar before, or if I am doing something stupid.
Having done what I should have started with (reading the zip64 format specification), it's actually clear that this is not an integer overflow error.
Zip files contain a central directory at the end of the archive; this contains amongst other things the names of the compressed files and the offset of the compressed data in the zip archive. The offset (and file size fields) are only given 4 bytes each in the standard directory field; when the offset is larger than this it should instead be given in the extra fields section and the value in the standard field should be set to 0xFFFFFFFF. Since this is the offset that gets used when reading the file it seems clear that the problem lies in the parsing of the extra field.
I had a look at the source code for R 4.2.1 and it seems that the problem is due to the way the offset specified in the standard offset field is tested:
if(file_info.uncompressed_size == (ZPOS64_T)(unsigned long)-1)
changing this to == 0xFFFFFFFF seems to fix the problem. (On an LP64 platform, unsigned long is 64 bits, so (ZPOS64_T)(unsigned long)-1 is 0xFFFFFFFFFFFFFFFF and can never match the 32-bit sentinel written to the standard field.)
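To illustrate the distinction, a standalone C sketch (the variable name is hypothetical, not the R source):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t field = 0xFFFFFFFFu; /* 32-bit sentinel: "see the zip64 extra field" */
    /* On LP64, (unsigned long)-1 widens to 0xFFFFFFFFFFFFFFFF,
       so it never equals the 32-bit sentinel value. */
    printf("%d\n", field == (uint64_t)(unsigned long)-1); /* prints 0 on LP64 */
    printf("%d\n", field == 0xFFFFFFFFu);                 /* prints 1: the intended test */
    return 0;
}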
I've submitted a bug report to R. Hopefully changing the check will not have any unintended consequences and the issue will be fixed.
Still, I'm curious as to whether anyone else has come across the same issue. Seems a bit unlikely that my experience is unique.
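In the meantime, a workaround sketch that sidesteps the internal reader (assuming an external unzip binary on the PATH, which handles zip64 correctly, as tested above):
## extract the affected member to a temporary directory with the
## external unzip program, then read it from disk
tmp <- tempdir()
unzip(ncbi.zip, files = gff.files[129], exdir = tmp, unzip = "unzip")
gff.129 <- readLines(file.path(tmp, gff.files[129]))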

ASP File Object Issue

I am working on an ASP classic website. The client reported that a previously working function, which listed only "image" file types, suddenly stopped working. Upon reading the code, I found that the loop listing the files in a folder uses the InStr() function to identify files by their type, which should be "image". However, something must have changed in the OS, as the type is no longer "image" but "JPG", "PNG", etc. This dramatically changes the way the code works. Following is a snippet of the code:
Set oFSO = Server.CreateObject("Scripting.FileSystemObject")
Set oFolder = oFSO.GetFolder(Server.MapPath(sCurrentDirectoryPath))
Set oSubFolder = oFolder.Files
iFileCount = 0
For Each oFileName In oSubFolder
    If InStr(1, LCase(oFileName.Type), "image") > 0 Then
        iFileCount = iFileCount + 1
    End If
Next
Because the InStr() function is looking for a file type containing "image", no files are counted, and the function returns zero files found. Whilst debugging, I found that the values being returned by oFileName.Type were as follows:
This is the file type:JPG File
This is the file type:JPG File
This is the file type:Text Document
This is the file type:Data Base File
Files in the folder were two "whatever.jpg" files, a "whatever.txt" file, and a "thumbs.db" file. So it appears that the OS (Windows Server 2019) may have changed to be less generic when reporting an "image" file, and is now reporting "JPG File", "PNG File", etc. This, of course, breaks the code! Are there any suggestions from you all on how I could go about modifying this code to report exactly how many image files are present?
On Windows 10, the Type values for .jpg and .png files are JPEG image and PNG image respectively. What OS are you using?
Also, Type doesn't actually analyze the file: you could have a virus.exe file in the folder that has been renamed to virus.jpg, and the Type value will still show it as JPEG. So if the function is intended to check user-uploaded content to ensure images are actually images, the Type value will be of no use. If you have root access you could install a COM DLL that uses a program such as ExifTool to properly analyze files (https://github.com/as08/ClassicASP.ExifTool), however that would be a complete rewrite.
But assuming you're not looking to check whether an image file is actually an image file, you could just split off the filename extension and use Select Case to count the image files, if your OS is returning just "XXX File" and no longer "XXX image" in the Type value (alternatively you could split the Type value, but you'd still need to check for valid image file extensions):
Dim oFSO, oFolder, oSubFolder, oFileName, iFileCount, oFileNameSplit
Set oFSO = Server.CreateObject("Scripting.FileSystemObject")
Set oFolder = oFSO.GetFolder(Server.MapPath(sCurrentDirectoryPath))
Set oSubFolder = oFolder.Files
iFileCount = 0
For Each oFileName In oSubFolder
    oFileNameSplit = Split(oFileName.Name, ".")
    If UBound(oFileNameSplit) > 0 Then
        Select Case Trim(LCase(oFileNameSplit(UBound(oFileNameSplit))))
            Case "jpg", "jpeg", "png", "gif" ' Maybe add some more extensions...
                iFileCount = iFileCount + 1
        End Select
    End If
Next
Set oFSO = Nothing
Set oFolder = Nothing
Set oSubFolder = Nothing
Response.Write(iFileCount)
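As a side note, the FileSystemObject you already have can parse the extension for you via GetExtensionName, which avoids the manual Split. A sketch of the loop body only:
Select Case LCase(oFSO.GetExtensionName(oFileName.Name))
    Case "jpg", "jpeg", "png", "gif"
        iFileCount = iFileCount + 1
End Select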

Documentum Document Content Share Location from SQL

I am trying to sort out how to find the physical location of a file on a mapped Documentum share. There are several ways to do it using the API or DQL, but neither of those will scale to what we need in order to migrate data out of the system. Ultimately the plan is to migrate all data out and into a new system, but we need the file locations to plan this out.
The following resources have been helpful:
https://robineast.wordpress.com/2007/01/24/where-is-my-content-stored/
https://community.emc.com/thread/51958?start=0&tstart=0
Running this DQL will give us the location, but the SQL provided does not return any data relevant to what we're trying to accomplish (or anything at all).
execute GET_PATH for '<parent_id_goes_here>'
Result:
t:\documentum\data\schema\storage_volume_number\00000000\80\01\ef\63.xlsx
Additionally, using the API with getpath returns valid data, but when choosing to show the SQL it gives the same query (a little further down), which doesn't actually give the location of the file.
API>getpath,c,<r_object_id>
...
t:\documentum\data\schema\storage_volume_number\00000000\80\01\ef\63.xlsx
This is the query provided in both cases when you choose 'Show the SQL'.
select a.r_object_id, b.audit_attr_names, a.is_audittrail,
a.event, a.controlling_app, a.policy_id,
a.policy_state, a.user_name, a.message,
a.audit_subtypes, a.priority, a.oneshot,
a.sendmail, a.sign_audit
from dmi_registry_s a, dmi_registry_r b
where a.r_object_id = b.r_object_id and a.registered_id = :p0 and (a.event = 'all' or a.event = 'dm_all' or a.event = :p1)
order by a.is_audittrail desc, a.event desc,
a.r_object_id, b.i_position desc;
:p0 = < parent_id >;
:p1 = dm_getfile
The above query returns nothing in PL/SQL, and removing the :p0/:p1 variables just returns audit data.
Any guidance on how to get this using SQL, or a DQL script that could be written to give the path and r_object_id in a CSV to join? I'm also open to other ideas of pulling data out of this system.
After a lot of digging I found that the best way to go about this is to convert the data ticket into your path. To quote the articles linked in the question:
The trick to determining the path to the content is in decoding the data_ticket's 2's complement decimal value. Convert the data_ticket to a 2's complement hexadecimal number by first adding 2^32 to the number and then converting it to hex. You can use a scientific calculator to do this or grab some Java code off the net.
-2147474649 + 2^32 = (-2147474649 + 4294967296) = 2147492647
converting 2147492647 to hex = 80002327
Now, split the hex value of the data_ticket at every two characters, append it to file_system_path and docbase_id (padded to 8 digits), and add the dos_extension. Voila! You have the complete path to the content file.
C:/Documentum/data/docbase/content_storage_01/00000001/80/00/23/27.txt
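To gather the tickets in bulk rather than one GET_PATH at a time, they can be pulled from the content objects themselves. A DQL sketch along these lines (assuming the standard dmr_content type, whose parent_id is the document's r_object_id; the ROW_BASED hint flattens the repeating attribute) could be exported to CSV for the join:
select parent_id, data_ticket, storage_id, full_format
from dmr_content
enable (ROW_BASED)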
This PowerShell code will do the conversion for you -- just feed it the data ticket.
$Ticket = -2147474649
# add 2^32 to get the unsigned (2's complement) value
$FSTicketInt = $Ticket + [math]::Pow(2, 32)
# convert to hex: 2147492647 -> "80002327"
$FSTicketHex = [Convert]::ToString($FSTicketInt, 16)
# split into two-character chunks and join with backslashes: "80\00\23\27"
$FSTicketPath = ($FSTicketHex -split '(..)' | ? {$_}) -join '\'
Then all you need to do is join the path with the content storage location using [System.IO.Path]::Combine().
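For example (the storage root below is a placeholder; build it from your file_system_path and the 8-digit docbase id, and take the extension from the object's format):
$StoreRoot = 'C:\Documentum\data\docbase\content_storage_01\00000001'
$FullPath = [System.IO.Path]::Combine($StoreRoot, "$FSTicketPath.txt")
# -> C:\Documentum\data\docbase\content_storage_01\00000001\80\00\23\27.txt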

Reading in a binary grid file in Fortran 90

I'm having issues when trying to read a binary file I've previously written into another program. I have been able to open it and read it into an array without compilation errors; however, the array is not populated (all 0's). Any suggestions or thoughts would be great. Here is the open/read statement I'm using:
allocate(dummy(imax,jmax))
open(unit=io, file=trim(input), form='binary', access='stream', &
     iostat=ioer, status='old', action='READWRITE')
if (ioer /= 0) then
   print*, 'Cannot open file'
else
   print*, 'success opening file'
end if
read(unit=io, fmt=*, iostat=ioer) dummy
j = 0
k = 0
size: do j = 1, imax
   do k = 1, jmax
      if (dummy(j,k) > 0.) print*, dummy(j,k)
   end do
end do size
Please let me know if you need more info.
Here is how the file is originally written:
out_file = trim(output_dir)//'SEVIRI_FRP_.08deg_'//trim(season)//'.bin'
print*, out_file
print*, i_max,' i_max,',j_max,' j_max'
open (io, file = out_file, access = 'direct', status = 'replace', recl = i_max*j_max*4)
write(io, rec = 1) sev_frp
write(io, rec = 2) count_sev_frp
write(io, rec = 3) sum_sev_frp
check: do n=1, i_max
inna: do m=1, j_max
!if (sev_frp(n,m) > 0) print*, count_sev_frp(n,m)
end do inna
end do check
print*,'n-',n,'m-',m
close(io)
First of all, the form= specifier takes two possible values as far as I know: 'FORMATTED' or 'UNFORMATTED'.
Second, to read the file back you should use an open statement that is symmetric to the one you used to write the file, unless you know exactly what you are doing. I suggest that for reading, you open with:
open(unit=io, file=trim(input), access='direct', &
iostat=ioer, status='old', action='READ', recl = i_max*j_max*4)
That corresponds to the open statement that you used to save the file.
As innoSPG says, you have a mismatch in the way the file is written and how it is read.
An external file may be connected with one of three access methods: sequential; direct; stream. Further, a connection may be formatted or unformatted.
When the file is opened for writing it uses the direct access method with unformatted records. The records are unformatted because this is the default (in the absence of the form= specifier).
When you open the file for reading you use the non-standard extension of form="binary" and stream access. There is possibly nothing wrong with this, but it does require care.
However, with the read statements you are using formatted (list-directed) input. This will not be allowed.
The way suggested in the previous answer, of using a similar access method and record length will require a further change to the code. [You'll also need to set the value of the record length somehow.]
Not only will you need to remove the format, to match the unformatted records written, but you'll want to use the rec= specifier to access the records of the file.
Finally, if you are using the iostat= specifier you really should check the resulting value.
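Putting that together, a sketch of a matching read (assuming imax/jmax here equal the i_max/j_max used for writing, that the arrays hold default 4-byte reals, and that recl is counted in bytes, as with gfortran; some compilers count 4-byte words instead):
allocate(dummy(imax,jmax))
open(unit=io, file=trim(input), access='direct', form='unformatted', &
     status='old', action='read', recl=imax*jmax*4, iostat=ioer)
if (ioer /= 0) stop 'Cannot open file'
read(io, rec=1, iostat=ioer) dummy   ! record 1 holds sev_frp
if (ioer /= 0) stop 'Error reading record 1'
close(io)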

How to write some amounts to a File?

I'm doing this program for a class; I have a listbox with 4 choices that are counted each time they're selected, and I'm supposed to write the results out to a file so that they can then be retrieved and displayed when asked for.
Here's an image, so you can see what I mean.
Here's the code I've got for the file part so far:
'declare a StreamWriter variable
Dim outFile As IO.StreamWriter
'open the file for output
outFile = IO.File.CreateText("Commercials.txt")
'write to the file
For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
    outFile.WriteLine(lstCommercial.Items(intIndex))
Next intIndex
'close the file
outFile.Close()
So my problem is this: I can get everything to work except writing the totals to the file, with the result that nothing is displayed. How do I go about doing this? What am I doing wrong, in any case?
It depends on what lstCommercial.Items is.
If it is a Collection object, then you can get the index of an item using the following:
outFile.WriteLine(lstCommercial.Items.Item(intIndex))
If it is an array, then instead of using the Count property to find the length, you should rather use GetUpperBound(0) to find the last index:
For intIndex As Integer = 0 To lstCommercial.Items.GetUpperBound(0)
I have very little experience in Visual Basic so I am probably wrong, but I have just gathered this from reading Arrays in Visual Basic and Collection Object (Visual Basic).
Also, looking at the docs here, you may need to 'Flush' the buffer to the file before closing it:
outFile.Flush()
'close the file
outFile.Close()
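For what it's worth, here is a minimal sketch of the writing side that can't forget to flush or close (the intCounts array holding the per-item totals is an assumption; the question doesn't show where the counts are stored):
'write each choice and its running total, one per line
'Using disposes the writer, which flushes and closes the file
Using outFile As IO.StreamWriter = IO.File.CreateText("Commercials.txt")
    For intIndex As Integer = 0 To lstCommercial.Items.Count - 1
        outFile.WriteLine(lstCommercial.Items(intIndex).ToString() & "," & intCounts(intIndex).ToString())
    Next intIndex
End Using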