AssertResultSetsHaveSameMetaData in tSQLt

I am using the tSQLt AssertResultSetsHaveSameMetaData assertion to compare metadata between two tables. The problem is that I cannot hardcode the table name, since I am passing the table name as a parameter at run time. Is there any way to do that?

You use tSQLt.AssertResultSetsHaveSameMetaData by passing two select statements like this:
exec tSQLt.AssertResultSetsHaveSameMetaData
'SELECT TOP 1 * FROM mySchema.ThisTable;'
, 'SELECT TOP 1 * FROM mySchema.ThatTable;';
So it should be quite easy to parameterise the names of the tables you are comparing and build the SELECT statements based on those table name parameters.
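For example, here is a minimal sketch of that approach, assuming the two table names arrive in local variables (the variable names below are made up for illustration):
-- Hedged sketch: @ThisTable and @ThatTable are assumed to hold the schema-qualified table names supplied at run time.
DECLARE @ThisTable NVARCHAR(256) = N'mySchema.ThisTable';
DECLARE @ThatTable NVARCHAR(256) = N'mySchema.ThatTable';
DECLARE @expected NVARCHAR(MAX) = N'SELECT TOP 1 * FROM ' + @ThisTable + N';';
DECLARE @actual   NVARCHAR(MAX) = N'SELECT TOP 1 * FROM ' + @ThatTable + N';';
EXEC tSQLt.AssertResultSetsHaveSameMetaData @expected, @actual;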
However, if you are using the latest version of tSQLt you can also now use tSQLt.AssertEqualsTableSchema to do the same thing. You would use this assertion like this:
exec tSQLt.AssertEqualsTableSchema
'mySchema.ThisTable'
, 'mySchema.ThatTable';
Once again, parameterising the table names would be easy since they are passed to AssertEqualsTableSchema as parameters.
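As a rough sketch (the test class and procedure names below are assumed, not from the question, and the test class is assumed to already exist), a tSQLt test taking its table names from variables might look like:
-- Hedged sketch of a parameterised tSQLt test; testMySchema and the test name are placeholders.
CREATE PROCEDURE testMySchema.[test ThisTable and ThatTable have the same schema]
AS
BEGIN
    DECLARE @ThisTable NVARCHAR(256) = N'mySchema.ThisTable';
    DECLARE @ThatTable NVARCHAR(256) = N'mySchema.ThatTable';
    EXEC tSQLt.AssertEqualsTableSchema @ThisTable, @ThatTable;
END;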
If you explain the use case/context and provide sample code to explain what you are trying to do you stand a better chance of getting the help you need.

Related

OpenEdge Progress 4GL WRITE-XML NAMESPACE-PREFIX

Hi Progress OpenEdge dev,
I am using the following syntax to generate an XML file from a temp table. All is good except for one item.
dataset dsCust:write-xml("FILE", "c:/Test/Customer.xml", true).
This is my temp table declaration
def temp-table ttCustomer no-undo
namespace-uri "http://WMS.URI"
namespace-prefix "ns0"
field PurchaseOrderNumber as char
field Plant as char.
This is my output
<ns0:GoodsReceipt xmlns:ns0="http://WMS.URI">
<ns0:PurchaseOrderNumber/>
<ns0:Plant>Rose</ns0:Plant>
</ns0:GoodsReceipt>
But this is my desired output
<ns0:GoodsReceipt xmlns:ns0="http://WMS.URI">
<PurchaseOrderNumber/>
<Plant>Rose</Plant>
</ns0:GoodsReceipt>
Notice the elements inside the GoodsReceipt node do not have the ns0 prefix.
Can this be achieved using write-xml? I want to avoid using DOM or SAX if possible.
Thank you
You can always manually set attributes and tag-names using XML-NODE-TYPE and SERIALIZE-NAME.
However, I've worked with lots of XMLs and APIs together with Progress OpenEdge and have yet to fail because of namespace problems, but I guess it might depend on what you want to do with the data.
Since you don't include the entire dataset, this is something of a guess. It produces more or less what you want for this specific case. I don't know how multiple "receipts" should be rendered, though, so you might need to change this.
DEFINE TEMP-TABLE ttCustomer NO-UNDO SERIALIZE-NAME "ns0:GoodsReceipt"
FIELD xmlns AS CHARACTER SERIALIZE-NAME "xmlns:ns0" INITIAL "http://WMS.URI" XML-NODE-TYPE "ATTRIBUTE"
FIELD PurchaseOrderNumber AS CHARACTER
FIELD Plant AS CHARACTER .
DEFINE DATASET dsCust SERIALIZE-HIDDEN
FOR ttCustomer .
CREATE ttCustomer.
ASSIGN Plant = "Rose".
DATASET dsCust:write-xml("FILE", "c:/temp/Customer.xml", TRUE).
From a quick Google on the subject, it seems the W3C suggests that the namespace prefix should be presented the way OpenEdge does it: https://www.w3schools.com/xml/xml_namespaces.asp
And I'm pretty certain you can't change the behaviour with write-xml like you want to either. The documentation doesn't mention any way of overriding the behaviour. https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dvxml/namespace-uri-and-namespace-prefix.html

Use CASE statement for parameter for column name in SQL WHERE clause

I have been looking all day for a solution that works for my situation. I have found some things that are very similar, but they don't work for my situation; I tried them.
Here is the scenario: I have two tables, base and partdetails. I have an ASP website (internal ONLY) that has drop-down lists to select the parameters for a SQL query that fills a data grid view.
My problem is this: based on the drop-down list boxes on the page, I need to be able to assign the column name against which the entered criteria will be searched.
Here is the query that I am trying to define: (This one returns 0 rows)
sqlCmd.CommandText = ("Select ba.referenceid, ba.partnum, pd.width, pd.length, CONVERT(varchar(12), pd.dateentered, 101) As [dateentered], ba.partqty, ba.status, ba.material From tbl_dlbase ba Join tbl_partdetails pd On ba.referenceid = pd.referenceid Where Case @field1 When 'part #' Then 'ba.partnum' When 'Spacing' Then 'pd.spacing' When 'Surface' Then 'pd.surface' When 'Height' Then 'pd.height' When 'Thickness' Then 'pd.thickness' End Like '%' + @criteria1 + '%'")
sqlCmd.Parameters.AddWithValue("@field1", ddlSc1.SelectedItem.Text)
sqlCmd.Parameters.AddWithValue("@criteria1", txbCriteria1.Text)
This is the latest version of the SQL statement that I have tried. I need to be able to set the field/column name based on the selection from the drop-down list ddlSc1 on the ASP page.
I have also been trying the queries in Management Studio to see if maybe I have fat-fingered something, but it also returns 0 rows, so I know something is wrong with the query.
So how can I set the column name using a parameter for the name? I know this is a huge security concern with SQL injection, but this is an internal-only site, and more importantly my boss said he wants it done with variables.
I don't really see a problem with this other than you have single quotes around your THEN values. Does this fix it?
SELECT ba.referenceid
,ba.partnum
,pd.width
,pd.length
,CONVERT(VARCHAR(12), pd.dateentered, 101) AS [dateentered]
,ba.partqty
,ba.STATUS
,ba.material
FROM tbl_dlbase ba
JOIN tbl_partdetails pd ON ba.referenceid = pd.referenceid
WHERE CASE @field1
WHEN 'part #'
THEN ba.partnum
WHEN 'Spacing'
THEN pd.spacing
WHEN 'Surface'
THEN pd.surface
WHEN 'Height'
THEN pd.height
WHEN 'Thickness'
THEN pd.thickness
END LIKE '%' + @criteria1 + '%'
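If you want to try the query directly in Management Studio (as mentioned in the question), one hedged way to stand in for the ASP parameters is to declare local variables with the same names; the sample values below are made up:
-- Local stand-ins for the ASP page parameters; adjust the sample values to match real data.
DECLARE @field1 VARCHAR(20) = 'part #';
DECLARE @criteria1 VARCHAR(50) = '123';
-- ...then run the SELECT above; rows whose partnum contains '123' should come back.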

Teradata: calculate cast as length of column

I need to use the CAST function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1|dhawal
2|bhaskar
I need to use a cast operation something like:
select cast(name as CHAR(<length of column>)) from table
How can I do that?
Thanks
Dhawal
You have to find the length by looking at the table definition - either manually (show table) or by writing dynamic SQL that queries dbc.ColumnsV.
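For instance, a minimal sketch of the dictionary lookup (the database, table and column names are placeholders):
-- ColumnLength holds the defined length for CHAR/VARCHAR columns in dbc.ColumnsV.
SELECT ColumnName, ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'MyDatabase'
  AND TableName    = 'MyTable'
  AND ColumnName   = 'name';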
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport, I think casting as varchar(large-enough-value) and post-processing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.

MS Access use iif statement select query as alias

I am trying to build a query to get the student results for a specific exam as a table that can be merged into a Word document. The following works fine but seems very inefficient, since I need to call the same query twice in the same IIf statement.
Test1: IIf(Round((SELECT tblMarks.Score FROM tblMarks WHERE tblMarks.Test = 'Test1' AND [tblMarks].[ID] = [tblStudents].ID AND [tblMarks].[Rewrite] = false)*100,0)<70,70,Round((SELECT tblMarks.Score FROM tblMarks WHERE tblMarks.Test = 'Test1' AND [tblMarks].[ID] = [tblStudents].ID AND [tblMarks].[Rewrite] = false)*100,0))
To get rid of the second query call I tried the following, but StudentScore is not being recognized by the IIf false condition.
Test1: IIf(Round((SELECT tblMarks.Score AS StudentScore FROM tblMarks WHERE tblMarks.Test = 'Test1' AND [tblMarks].[ID] = [tblStudents].ID AND [tblMarks].[Rewrite] = false)*100,0)<70,70, StudentScore)
I have many of those test fields (Test2, Test3, etc.), so even just removing the extra query per field would probably help speed things up quite a bit.
Does anyone have any idea if what I am trying to do is even possible? Any help appreciated.
Thanks.
UPDATE:
I am trying to create a table/query to be used to merge into an MS Word document with fields. This new query combines many tables into one. Here's an example of the table structure:
tblStudent: StudentID, Name etc... A lot of personal info.
tblScore: StudentID, Test, Score, Rewrite etc...
The new query fields are:
DISTINCT tblStudent.StudentID, tblStudent.Name, tblScore.Test(as shown above) AS Test1, tblScore.Test(Same as above but with test2) AS Test2, ... Where CourseName.....
Hope this helps people see what I am trying to do; it works fine, I am just trying to eliminate the second query in the IIf statement. Sorry, this is the best I can do right now since I am not at work and that is where all this stuff is stored.

Bracket-escaped table names with dplyr

I'm programmatically fetching a bunch of datasets, many of them having silly names that begin with numbers and have special characters like minus signs in them. Because none of the datasets are particularly large, and I wanted the benefit of R making its best guess about data types, I'm (ab)using dplyr to dump these tables into SQLite.
I am using square brackets to escape the horrible table names, but this doesn't seem to work. For example:
data(iris)
foo.db <- src_sqlite("foo.sqlite3", create = TRUE)
copy_to(foo.db, df=iris, name="[14m3-n4m3]")
This results in the error message:
Error in sqliteSendQuery(conn, statement, bind.data) : error in statement: no such table: 14m3-n4m3
This works if I choose a sensible name. However, for a variety of reasons, I'd really like to keep the cumbersome names. I am also able to create such a badly-named table directly from sqlite:
sqlite> create table [14m3-n4m3](foo,bar,baz);
sqlite> .tables
14m3-n4m3
Without cracking into things too deeply, this looks like dplyr is handling the square brackets in some way that I cannot figure out. My suspicion is that this is a bug, but I wanted to check here first to make sure I wasn't missing something.
EDIT: I forgot to mention the case where I just pass the janky name directly to dplyr. This errors out as follows:
library(dplyr)
data(iris)
foo.db <- src_sqlite("foo.sqlite3", create = TRUE)
copy_to(foo.db, df=iris, name="14M3-N4M3")
Error in sqliteSendQuery(conn, statement, bind.data) :
error in statement: unrecognized token: "14M3"
This is a bug in dplyr. It's still there in the current GitHub master. As @hadley indicates, he has tried to escape things like table names in dplyr to prevent this issue. The current problem you're having arises from a lack of escaping in two functions. Table creation works fine when providing the table name unescaped (and is done with dplyr::db_create_table). However, the insertion of data into the table is done using DBI::dbWriteTable, which doesn't support odd table names. If the table name is provided to this function escaped, it fails to find it in the list of tables (the first error you report). If it is provided unescaped, then the SQL to do the insertion is not syntactically valid.
The second issue comes when the table is updated. The code to get the field names, this time actually in dplyr, again fails to escape the table name because it uses paste0 rather than build_sql.
I've fixed both errors at a fork of dplyr. I've also put in a pull request to @hadley and made a note on the issue https://github.com/hadley/dplyr/issues/926. In the meantime, if you wanted to you could use devtools::install_github("NikNakk/dplyr", ref = "sqlite-escape") and then revert to the master version once it's been fixed.
Incidentally, the correct SQL-99 way to escape table names (and other identifiers) in SQL is with double quotes (see SQL standard to escape column names?). MS Access uses square brackets, while MySQL defaults to backticks. dplyr uses double quotes, per the standard.
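As a quick illustration of the quoting styles (the table name is taken from the question; the statements are just a sketch, with suffixes added so they don't collide):
-- SQL-99 double quotes, which dplyr emits and SQLite accepts:
CREATE TABLE "14m3-n4m3" (foo, bar, baz);
-- SQLite also accepts MS Access-style brackets and MySQL-style backticks:
CREATE TABLE [14m3-n4m3-2] (foo, bar, baz);
CREATE TABLE `14m3-n4m3-3` (foo, bar, baz);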
Finally, the proposal from @RichardScriven wouldn't work universally. For example, select is a perfectly valid name in R, but is not a syntactically valid table name in SQL. The same would be true for other reserved words.