Jaspersoft Studio - Create collection of Strings

Using Jaspersoft Studio 6.4.
I am trying to create a java.util.Collection, with nested type java.lang.String.
I want to populate the collection with the values from my data query: iterate through the values of the Field $F{CostCenter} and add each value to my collection. (My query is a domain query).
I have tried:
Creating a collection variable
Incrementing the variable by my CostCenter group
Adding the field value to my variable
<variable name="dls_CCArray" class="java.util.Collection" incrementType="Group" incrementGroup="CCGroup">
<variableExpression><![CDATA[$V{dls_CCArray}.add( $F{costCenterSet.costCenterConcatenated} )]]>
</variableExpression>
</variable>
But my variable is null, even though I know my query is returning cost centers.
The reason I need to do this: I have an optional input control. When I select no cost centers, I still need to pass the list of cost center values returned by the query to my next report through my hyperlink parameter.
Thanks in advance

You can use a second variable to add the value to the collection variable. Also, since the engine might evaluate variable expressions more than once, it would be safer to collect the values in a Set so that you don't end up with duplicate values.
Therefore you could have something like this:
<variable name="Values" class="java.util.Set" calculation="System">
<initialValueExpression>new java.util.HashSet()</initialValueExpression>
</variable>
<variable name="ValueAdd" class="java.lang.Boolean">
<variableExpression>$V{Values}.add($F{costCenterSet.costCenterConcatenated})</variableExpression>
</variable>
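
To see why the helper-variable pattern works, here is a minimal plain-Java sketch (outside JasperReports, with invented cost center values). Set.add() returns a boolean rather than the collection, which is why the original expression cannot usefully be assigned to a java.util.Collection variable, and a HashSet silently absorbs any repeated evaluations of the same row:

import java.util.HashSet;
import java.util.Set;

public class CollectCostCenters {
    public static void main(String[] args) {
        Set<String> values = new HashSet<>();

        // Simulate the engine evaluating the expression once (or more) per row;
        // the cost center values here are made up for the demo.
        String[] rows = {"CC-100", "CC-100", "CC-200", "CC-300"};

        for (String costCenter : rows) {
            // add() returns a boolean, not the Set, so the result of the
            // expression is only useful for a Boolean helper variable.
            boolean newlyAdded = values.add(costCenter);
            System.out.println(costCenter + " newly added: " + newlyAdded);
        }

        // Duplicates have been collapsed: [CC-100, CC-200, CC-300]
        System.out.println(values);
    }
}

The $V{Values} variable can then be passed to your hyperlink parameter, while $V{ValueAdd} exists only for its side effect of filling the set.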

Related

Delphi - ClientDataSet SQL calculated field causing "Invalid field type" error at runtime [duplicate]

Using Delphi 10.2, SQLite and TeeChart. My SQLite database has two fields, created with:
CREATE TABLE HistoryRuntime ('DayTime' DateTime, Device1 INTEGER DEFAULT (0));
I access the table using a TFDQuery called qryGrpahRuntime with the following SQL:
SELECT DayTime AS TheDate, Sum(Device1) AS DeviceTotal
FROM HistoryRuntime
WHERE (DayTime >= "2017-06-01") AND (DayTime <= "2017-06-26")
GROUP BY Date(DayTime)
Using the Field Editor in the Delphi IDE, I can add two persistent fields, getting TheDate as a TDateTimeField and DeviceTotal as a TLargeIntField.
I run this query in a program to create a TeeChart, which I created at design time. As long as the query returns some records, all this works. However, if there are no records for the requested dates, I get an EDatabaseError exception with the message:
qryGrpahRuntime: Type mismatch for field 'DeviceTotal', expecting: LargeInt actual: Widestring
I have done plenty of searching for solutions on the web on how to prevent this error on an empty query, but have had no luck with anything I found. From what I can tell, SQLite defaults to the wide string field type when no data is returned. I have tried using CAST in the query and it did not seem to make any difference.
If I remove the persistent fields, the query will open without problems on an empty return set. However, in order to use the TeeChart editor in the IDE, it appears I need persistent fields.
Is there a way I can make this work with persistent fields, or am I going to have to throw out the persistent fields and then add the TeeChart Series at runtime?
This behavior is described in the Adjusting FireDAC Mapping chapter of FireDAC's SQLite manual:
For an expression in a SELECT list, SQLite avoids type name information. When the result set is not empty, FireDAC uses the value data types from the first record. When empty, FireDAC describes those columns as dtWideString. To explicitly specify the column data type, append ::<type name> to the column alias:
SELECT count(*) as "cnt::INT" FROM mytab
So modify your command e.g. this way (I used BIGINT, but you can use any pseudo data type that maps to a 64-bit signed integer data type and is not auto-incrementing, which corresponds to your persistent TLargeIntField field):
SELECT
DayTime AS "TheDate",
Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
HistoryRuntime
WHERE
DayTime BETWEEN {d 2017-06-01} AND {d 2017-06-26}
GROUP BY
Date(DayTime)
P.S. I made a small optimization by using the BETWEEN operator (which evaluates the column value only once), and I used an escape sequence for the date constants (in a real application you would probably replace these with parameters, so that part is just a curiosity).
This data type hinting is parsed by the FDSQLiteTypeName2ADDataType procedure, which takes and parses a column name in the format <column name>::<type name> in its AColName parameter.

How can I get cts:values for elements with special attribute value

I have a document
<document>
<category selected="true">a</category>
<category>b</category>
<category selected="true">c</category>
</document>
How can I get just the values from category[@selected eq 'true']?
I was trying to use next:
cts:element-values(xs:QName("category"), (), (), cts:element-attribute-value-query(xs:QName("category"), xs:QName("selected"), "true"))
but I understand that in this case I will get all the categories.
Your cts:element-attribute-value-query() is matching all documents that have a category element with a selected attribute of true. Then your cts:element-values() returns the distinct values of all of the category elements in each of those documents, regardless of whether the category has a @selected = 'true' attribute.
Presumably you want to get the values out of many, perhaps hundreds of millions of documents, that are similarly structured, and not just this one. For one document, XPath would be fine. Across an entire database, however, you’ll need a range index to do this efficiently. Range indexes, as their name implies, keep an ordered set of values and references to the documents in which they’re found in memory. This makes getting distinct values or calculations across ranges of values very efficient.
With your range index you can use cts:values() to get values straight out of the indexes without having to read the documents themselves. Given your document structure, you’ll need a path range index to differentiate selected categories from unselected ones. Thus you’d create a path range index on category[@selected = 'true'] and then call cts:values(cts:path-reference("category[@selected = 'true']")). cts:values() can also take a cts:query as its fourth parameter to limit the domain of documents over which the values are matched.
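
If you are reaching MarkLogic from Java rather than server-side XQuery, the same call can be issued through the Java Client API's eval endpoint. This is only a sketch: the host, port, and credentials are placeholders, the eval call requires a suitably privileged user, and it assumes the path range index described above already exists:

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.eval.EvalResult;
import com.marklogic.client.eval.EvalResultIterator;

public class SelectedCategoryValues {
    public static void main(String[] args) {
        // Placeholder connection details.
        DatabaseClient client = DatabaseClientFactory.newClient(
            "localhost", 8000,
            new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        // Pull distinct values straight from the path range index.
        EvalResultIterator results = client.newServerEval()
            .xquery("cts:values(cts:path-reference(\"category[@selected = 'true']\"))")
            .eval();
        for (EvalResult result : results) {
            System.out.println(result.getString());
        }
        results.close();
        client.release();
    }
}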

DynamoDB data model secondary index search

Folks,
Given we have to store the following shopping cart data:
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
We need to run the following queries:
Give me all items (which is a list) for a specific user (easy).
Give me all users which have itemID3 (precisely my question).
How would you model this in DynamoDB?
Option 1, only have the Hash key? ie
HashKey(users) cartItems
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
Option 2, Hash and Range keys?
HashKey(users) RangeKey(cartItems)
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
But it seems that range keys can only be strings, numbers, or binary...
Should this be solved by having 2 tables? How would you model them?
Thanks!
Rule 1: The range key of a DynamoDB table must be scalar, which is why its type must be string, number, or binary. It can't be a list, a set, or a map.
Rule 2: You cannot (currently) create a secondary index on a nested attribute; see the Improving Data Access with Secondary Indexes in DynamoDB documentation. That means you cannot index cartItems, since it is not a top-level attribute. You may need another table for this.
So, the simple answer to your question is another question: how do you use your data?
If you query users by a given item (say itemID3 in your case) infrequently, a Scan operation with a filter expression may work just fine. To model your data, you can use the user id as the HASH key and cartItems as a string set (SS type) attribute. For queries, you provide a filter expression to the Scan operation like this:
contains(cartItems, :expectedItem)
and provide the value itemID3 for the placeholder :expectedItem in the expression attribute values map.
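
As a rough sketch with the AWS SDK for Java (v1), where the table name Carts and the userID attribute name are assumptions based on the example data:

import java.util.Collections;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class FindUsersWithItem {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // Scan the table, keeping only items whose cartItems string set
        // contains the requested item id.
        ScanRequest scan = new ScanRequest()
            .withTableName("Carts") // assumed table name
            .withFilterExpression("contains(cartItems, :expectedItem)")
            .withExpressionAttributeValues(Collections.singletonMap(
                ":expectedItem", new AttributeValue().withS("itemID3")));

        ScanResult result = client.scan(scan);
        result.getItems().forEach(item ->
            System.out.println(item.get("userID").getS())); // assumed hash key name
    }
}

Keep in mind that the filter is applied after the items are read, so the Scan still consumes read capacity for the entire table; that is exactly why the second, inverted table described next is the better fit for frequent lookups.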
If you run queries like this frequently, you can instead create a second table with the item id as its HASH key and the set of users having that item as a string set attribute. The 2nd query in your question then becomes the 1st query against this other table.
Be aware that you then need to maintain the data in both tables on every CRUD action; keeping the two in sync can be made fairly painless with DynamoDB Streams.

Teradata: Is it possible to generate an identity column value without creating a record?

In Oracle, I used to use sequences to generate value for a table's unique identifier. In a stored procedure, I'd call sequencename.nextval and assign that value to a variable. After that, I'd use that variable for the procedure's insert statement and the procedure's out param so I could deliver the newly-generated ID to the .NET client.
I'd like to do the same thing with Teradata, but I am thinking the only way to accomplish this is to create a table that holds a value that is sequentially incremented. Ideally, however, I'd really like to be able to acquire the value that will be used for an identity column's next value without actually creating a new record in the database.
No, it is not possible with Teradata, because Identity values are cached at either the parsing engine (PE) or AMP level, depending on the type of operation being performed. My understanding is that the DBC.IdCol table shows the next value that will be used to seed the next batch of IDENTITY values needed by the PE or AMP.
Another solution would be to avoid using IDENTITY in this manner for your UPI. You could instead use the ROW_NUMBER() window aggregate function, partitioned by your logical primary key, to seed the next range of values for your surrogate key.
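
For illustration, here is a hedged JDBC sketch of that ROW_NUMBER() approach; the table and column names are invented, and it assumes the Teradata JDBC driver is on the classpath. The surrogate keys are computed from the current maximum inside the INSERT...SELECT, so the assigned range can be read back deterministically afterwards instead of being pulled from an IDENTITY cache:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowNumberSurrogateKeys {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:teradata://dbshost/DATABASE=sandbox", "user", "pass");
             Statement stmt = conn.createStatement()) {

            // Number the staged rows 1..n and offset them by the current
            // maximum key, instead of relying on an IDENTITY column.
            stmt.executeUpdate(
                "INSERT INTO Customer (CustomerId, CustomerName) " +
                "SELECT m.max_id + ROW_NUMBER() OVER (ORDER BY s.CustomerName), " +
                "       s.CustomerName " +
                "FROM StagedCustomer s " +
                "CROSS JOIN (SELECT COALESCE(MAX(CustomerId), 0) AS max_id " +
                "            FROM Customer) m");

            // Read the assigned ids back by joining on the natural key,
            // e.g. to return them to the .NET client.
            try (ResultSet rs = stmt.executeQuery(
                     "SELECT c.CustomerId, c.CustomerName " +
                     "FROM Customer c JOIN StagedCustomer s " +
                     "  ON c.CustomerName = s.CustomerName")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}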

Building Accessories Schema and Bulk Insert

I am developing an automation application for a car service. I have just started the accessories module, but I can't work out how to build the data model schema.
I've got the accessories data in a text file, one record per line (not CSV or a similar format, so I split the fields out by substring positions). Every month, the factory sends the data file to the service. It includes the prices, the names, the codes, etc., and every month the prices are updated. I thought BULK INSERT was a good choice to get the data into SQL Server (and I tried it), but it isn't a solution to my problem: I don't want duplicate data just to capture the new prices. I thought about inserting only the prices into another table and building a relation between Accessories and AccessoriesPrices, but sometimes new accessories are added to the list, so I have to check every line against the Accessories table. On the other side, I also have to keep the quantities of the accessories, the invoices, etc.
By the way, they send about 70,000 lines every month. So, can anyone help me? :)
Thanks.
70,000 lines is not a large file. You'll have to parse this file yourself and issue ordinary insert and update statements based upon the data contained therein. There's no need for using bulk operations for data of this size.
The most common approach to something like this would be to write a simple SQL statement that accepts all of the parameters, then does something like this:
if (exists(select * from YourTable where <exists condition>))
    update YourTable set <new values> where <exists condition>
else
    insert into YourTable (<columns>) values (<values>)
(Alternatively, you could try rewriting this statement to use the merge T-SQL statement)
Where...
<exists condition> represents whatever you would need to check to see if the item already exists
<new values> is the set of Column = value statements for the columns you want to update
<columns> is the set of columns to insert data into for new items
<values> is the set of values that corresponds to the previous list of columns
You would then loop over each line in your file, parsing the data into parameter values, then running the above SQL statement using those parameters.
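
Here is a rough Java/JDBC sketch of that loop. The fixed-width column positions, the table name, and the column names are all assumptions, since the question only says the fields are split out by substring:

import java.io.BufferedReader;
import java.math.BigDecimal;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MonthlyAccessoryLoad {
    public static void main(String[] args) throws Exception {
        // One parameterized T-SQL block per line: update if the code exists,
        // otherwise insert a new accessory.
        String upsert =
            "if exists (select * from Accessories where Code = ?) " +
            "    update Accessories set Name = ?, Price = ? where Code = ? " +
            "else " +
            "    insert into Accessories (Code, Name, Price) values (?, ?, ?)";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=CarService", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(upsert);
             BufferedReader reader = Files.newBufferedReader(Paths.get("accessories.txt"))) {

            String line;
            while ((line = reader.readLine()) != null) {
                // The fixed-width layout (code, name, price) is an assumption.
                String code = line.substring(0, 10).trim();
                String name = line.substring(10, 60).trim();
                BigDecimal price = new BigDecimal(line.substring(60).trim());

                ps.setString(1, code);                       // exists check
                ps.setString(2, name);                       // update values
                ps.setBigDecimal(3, price);
                ps.setString(4, code);
                ps.setString(5, code);                       // insert values
                ps.setString(6, name);
                ps.setBigDecimal(7, price);
                ps.addBatch();
            }
            ps.executeBatch(); // 70,000 rows batch comfortably in one pass
        }
    }
}

The same parsed values could instead feed a single MERGE statement, as mentioned above; either way, batching the statements keeps the round trips manageable for a file of this size.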
