I'm working with Progress-4GL, and in there I'm dealing with quite a large table (130+ fields, 8000+ tuples).
In order to view that table, I've decided to dump the entries of that table and import this to Excel. This seems quite easy: just use "Data Administration", menu "Admin", menu item "Dump data and definition", "Table contents" (file extension: *.d).
But there's a catch: in the resulting *.d file, the fields are separated by a space, but some fields themselves contain spaces, making the result unimportable in Excel. Therefore I'd like to separate the fields with a character which is certainly not used in that table (the pipe character ("|") is perfect for that).
Does anybody know how I can tell my general framework (OpenEdge Desktop, release 11.6, not any "Studio" IDE) to use another character as a separator in the *.d table contents dump?
Most probably this is to be done using an entry in the Progress.ini file (as in the answer to this question), but I don't know which entry I'm dealing with.
It would also be nice to have the column headers in that *.d table contents dump, is that even possible?
Edit
Meanwhile I've found out that some of the character fields contain newlines, causing a single record in the table content dump to be spread over several lines. This, of course, makes the whole thing unusable. Is there any way to prevent newlines in the *.d file?
Thanks in advance
My favorite data format is JSON; unfortunately Excel cannot read JSON out of the box, but it can read XML.
Unfortunately, the Progress ABL methods WRITE-JSON and WRITE-XML do not work on database buffers, so you need to create a temp-table like your database table, BUFFER-COPY the records across, and then WRITE-XML on the temp-table.
def var ctable as char no-undo initial "customer".
def var hq as handle no-undo.
def var ht as handle no-undo.
def var hbtt as handle no-undo.
def var hbdb as handle no-undo.

create buffer hbdb for table ctable.

create temp-table ht.
ht:create-like( hbdb ).
ht:temp-table-prepare( ctable ).
hbtt = ht:default-buffer-handle.

create query hq.
hq:add-buffer( hbdb ).
hq:query-prepare( "for each " + hbdb:name ).
hq:query-open().

do while hq:get-next( no-lock ):
    hbtt:buffer-create().
    hbtt:buffer-copy( hbdb ).
end.

ht:write-xml( "file", ctable + ".xml", true ).
Note that I am not cleaning up after myself.
You can add a filter to the query if necessary.
You can omit fields in the copy if necessary.
Check the Admin -> Export Data -> Text menu item in the Data Administration tool. You can select "All" for the "Fields to Export" option. The "Field Delimiter" can be left empty; choose "|" (or any character of your choice) as the "Field Separator". The only case where this might fail is when a field's value spans multiple lines.
Also, there is a KB article about this. It might help: http://knowledgebase.progress.com/articles/Article/P8426
You will have to write code to do this yourself. The EXPORT statement is your friend.
You can use something like the below. This is basically what the Data Dictionary does.
OUTPUT TO 'customer.psv'. // choose your filename
FOR EACH Customer NO-LOCK:
    EXPORT DELIMITER '|' Customer.
END.
OUTPUT CLOSE.
If you wanted to write this generically/dynamically there's quite a bit more to do; you'll probably end up using buffer and buffer-field handles, and PUT UNFORMATTED.
I need to create triggers dynamically, and I won't need to drop them in the future.
So, I need code to do this.
Likely
CREATE TRIGGER random() BEFORE INSERT... (with random name)
Or
CREATE TRIGGER BEFORE INSERT... (without name)
Can I do this in sqlite shell?
I know it's bad practice, but it's an experiment.
Thanks.
I'm afraid that's not possible. According to the documentation, trigger-name is an atomic syntactical unit (as can be seen from it being lowercase, in a rounded-corner rectangle in the syntax diagrams), in the sense that it cannot be constructed by evaluating complex expressions. You are only allowed to enter a literal there (the same goes for table-name and index-name, by the way; see here and here).
What you can do instead is dynamically construct the whole query string before passing it to SQLite. E.g. if you are interacting with SQLite through Python, you can write something like:
tableName = "someRandomString"
db.execute("CREATE TABLE " + tableName + " (A INT, B TEXT)")
Or if you are using the Windows command prompt:
set tableName=someRandomString
sqlite3.exe test.sqlite "CREATE TABLE %tableName% (A INT, B TEXT)"
I know you were asking about triggers, not tables, but it's basically the same thing from your question's perspective, and the table creation syntax is shorter.
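For the trigger case itself, the same string-building approach works. Here is a minimal sketch using Python's sqlite3 module, with a randomly generated trigger name (the table, its columns, and the trigger body are made up for illustration):

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (A INT, B TEXT)")

# Trigger names cannot be parameterized, so build the name into the SQL string.
trigger_name = "trg_" + uuid.uuid4().hex
db.execute(
    "CREATE TRIGGER " + trigger_name + " BEFORE INSERT ON t "
    "BEGIN SELECT RAISE(ABORT, 'no inserts') WHERE NEW.A < 0; END"
)

db.execute("INSERT INTO t VALUES (1, 'ok')")      # allowed
try:
    db.execute("INSERT INTO t VALUES (-1, 'x')")  # rejected by the trigger
except sqlite3.IntegrityError as e:
    print(e)  # no inserts
```

The semicolon inside BEGIN ... END is part of the single CREATE TRIGGER statement, so it can be passed to execute() as one string.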
I just want to design a new report in the Progress AppBuilder.
Example:
I want to pass an employee number as an input parameter and get the respective employee's details in an Excel sheet. How do I write the code for that in Progress? Please see the image.
Thanks,
Of course I've no idea how your db is actually organized, but something like the following will export all of the "employee" records matching the employee number passed to the procedure. The output will be a CSV file suitable for Excel to open. It should give you some ideas about the syntax you might like to start with.
define variable employeeNum as integer no-undo.

update employeeNum.

run exportEmployees ( input employeeNum ).

procedure exportEmployees:

    define input parameter empNum as integer no-undo.

    define buffer employee for employee.

    output to value( "employee.csv" ).

    for each employee no-lock where employee.employeeId = empNum:
        export delimiter "," employee.
    end.

    output close.

    return.

end.
A simple way to get data to Excel is to use the EXPORT statement, i.e.:
OUTPUT TO file.csv.
FOR EACH Customer NO-LOCK:
    EXPORT DELIMITER ";" Customer.
END.
OUTPUT CLOSE.
That will export without any formatting though.
Depending on your version of Excel, you can try COM handles to interact with it directly instead of opening a saved file.
http://knowledgebase.progress.com/articles/Article/21671
Your code would be similar to the example above, but you would change the /* Add Data */ section to something like the following.
FOR EACH EMP NO-LOCK
    WHERE EMP.EMPNO = INTEGER(EMPNO:SCREEN-VALUE):
    /* Add data */
    ASSIGN chWorksheet:Range("B1"):VALUE = EMP.ENAME
           chWorksheet:Range("B2"):VALUE = EMP.SAL.
END.
This assumes the fill-in is called "EMPNO" and your EMPNO field is an integer.
The query should work in SQLite Manager for Firefox:
The problem: inside a row in one table of the database, an H-number (H followed by a number up to 9000) often appears twice when it should appear only once. For example, H6523H6523 should be just H6523. The field contains a lot of text, and the doubled H-numbers appear inside this text.
The H-numbers (e.g. H6523) are also in another table, in a separate column, so it is possible to get a list of what must be looked for.
Table one is content, and the column in which it is wrong is data (long text).
Table two is topics, and the column in which the H6523 is standing is subject (only the H + number).
With the replace command it should work, but I would have to write a separate replace command for each H-number.
So with triggers it should work.
But it does not work :(
The trigger step I set:
update content sET data=replace( (Select topics.sub2 From Topics), (select topics.subject from topics));
SQLite is designed as an embedded database, so it does not have much support for program logic.
You have to write your replacement code in any other language that has better support for text processing.
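A sketch of that approach in Python, assuming the table and column names from the question (content.data and topics.subject): fetch every subject, then collapse each doubled occurrence with one parameterized UPDATE per subject.

```python
import sqlite3

db = sqlite3.connect(":memory:")  # use your database file here
db.execute("CREATE TABLE topics (subject TEXT)")
db.execute("CREATE TABLE content (data TEXT)")
db.execute("INSERT INTO topics VALUES ('H6523')")
db.execute("INSERT INTO content VALUES ('see H6523H6523 for details')")

# For every subject like 'H6523', replace the doubled form with the single one.
for (subject,) in db.execute("SELECT subject FROM topics").fetchall():
    db.execute(
        "UPDATE content SET data = replace(data, ?, ?)",
        (subject + subject, subject),
    )

print(db.execute("SELECT data FROM content").fetchone()[0])
# see H6523 for details
```

This loops over the subjects in the host language, which is exactly the part SQLite's own replace() cannot do for you in a single statement.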
I have a sqlite database from which I want to extract a column of information with the datatype BLOB. I am trying this:
SELECT cast(data as TEXT) FROM content
This is obviously not working. The output is garbled text like this:
x��Uak�0�>�8�0Ff;I�.��.i%�A��s�M
The data in the content column is mostly text, but may also have images (which I recognized could cause a problem if I cast as TEXT). I simply want to extract that data into a usable format. Any ideas?
You can use
SELECT hex(data) FROM content
or
SELECT quote(data) FROM content
The first will return a hex string (ABCD), the second quoted as an SQL literal (X'ABCD').
Note that there's (currently) no way of converting hexadecimal column information back to a BLOB in SQLite. You will have to use C/Perl/Python/… bindings to convert and import those.
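For example, with Python's sqlite3 bindings you can turn the hex string back into bytes and bind it as a BLOB parameter (the table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE content (data BLOB)")
db.execute("INSERT INTO content VALUES (x'ABCD')")

# Export as hex, as with SELECT hex(data) above.
(hexdata,) = db.execute("SELECT hex(data) FROM content").fetchone()
print(hexdata)  # ABCD

# Re-import: convert the hex string back to bytes and bind it as a parameter.
blob = bytes.fromhex(hexdata)
db.execute("INSERT INTO content VALUES (?)", (blob,))
```

Binding a bytes object makes sqlite3 store it as a BLOB, so the round trip preserves the binary data exactly.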
You can write some simple script which will save all blobs from your database into files. Later, you can take a look at these files and decide what to do with them.
For example, this Perl script will create lots of files in the current directory containing your data blob fields. Simply adjust the SELECT statement to limit the fetched rows as you need:
use DBI;

my $dbh = DBI->connect("dbi:SQLite:mysqlite.db")
    or die DBI::errstr();

my $sth = $dbh->prepare(qq{
    SELECT id, data FROM content
});
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    # Create file with name of $row->{id}:
    open FILE, ">", "$row->{id}";
    # Blobs are binary, so don't let the OS translate line endings:
    binmode FILE;
    # Save blob data into this file:
    print FILE $row->{data};
    close FILE;
}

$dbh->disconnect();
This is my problem:
I'm reading data from an Excel file in a .NET MVC app: I read all the data from the Excel file and then loop over each record, inserting its data into my business model.
All works perfectly. However, I've found that one field sometimes returns an empty string when retrieved from the Excel file. Curiously, this field can contain a simple string, or a string that will be treated as an array (it can include '|' characters to build the array). In some Excel files the field returns empty when the '|' char is present, in others when it isn't, and this behaviour is consistent throughout a given file.
There are other fields that can receive the separator and always work OK. The only difference between them is that the working ones are pure strings, while the one that's failing is a string of numbers, possibly separated by '|'.
I've tried changing the separator character (I tried '#' with the same results) and specifically formatting the cells as text, without any success.
This is the method that extracts data from the Excel file:
private DataSet queryData(OleDbConnection objConn) {
    string strConString = "SELECT * FROM [Hoja1$] WHERE NUMACCION <> ''";
    OleDbCommand objCmdSelect = new OleDbCommand(strConString, objConn);
    OleDbDataAdapter objAdapter1 = new OleDbDataAdapter();
    objAdapter1.SelectCommand = objCmdSelect;
    DataSet objDataset = new DataSet();
    objAdapter1.Fill(objDataset, "ExcelData");
    return objDataset;
}
I first check the fields from the excel with:
fieldsDictionary.Add("Hours", table.Columns["HOURS"].Ordinal);
And later, when looping through the DataSet I extract data with:
string hourString = row.ItemArray[fieldsDictionary["Hours"]].ToString();
This hourString is empty in some records. In some Excel files it's empty when the record contains '|'; in others, it's empty when it doesn't. I haven't yet found a file where it returns empty on records of both kinds.
I'm quite confused about this. I'm pretty sure it's related to the numerical nature of the field data, but I cannot understand why forcing the cells in the Excel file to be "text" doesn't solve it.
Any help will be more than welcome.
OK, I finally solved this.
It seems Excel isn't able to treat a whole column as the same data type if it contains data of possibly different classes. This happens even if you force the cell format to be text in the workbook: when you query the data, the field's type is determined by the first record received. That was the reason why different files emptied different types of records: files starting with plain text emptied numeric values, and vice versa.
I've found a solution to this just changing the connection string to Excel.
This was my original connection string
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=pathToFile;Extended Properties="Excel 8.0;HDR=Yes;"
And this the one that fixes the problem
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=pathToFile;Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"
The IMEX=1 parameter tells the driver that it must treat all mixed-data columns as plain text. This won't work for you if you need to edit the Excel file, as this parameter also opens it in read-only mode. However, it was perfect for my situation.