I'm creating a report in BI Publisher using the BI Publisher Desktop tool for Word.
What I need is a table with a dynamic number of columns.
Let's imagine I'm listing stock by store: each row is an item, and I need a column for each store in the database. This must be dynamic, because a store can be created or deleted at any moment.
The number of stores, i.e. the number of columns that need to exist, is obtained from an SQL query that feeds the report through a data set.
The query will be something like SELECT COUNT(*) AS STORE_COUNT FROM STORE; in a data set named G_1, so the number of columns is the variable G_1::STORE_COUNT.
Is there any way this can be achieved?
I'm developing the report as an .rtf file, so any related help would be appreciated.
Thank you very much.
Create a .rtf file with the column names mapped to a .xdo or .xdm file. The mapped columns in the .xdo or .xdm file should appear in the cursor or the SELECT statement of your stored procedure or function.
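For the dynamic part, one hedged approach is to have the data set return one row per store rather than just the count, so the template can loop over the group and emit one table cell per store. A minimal sketch, assuming a hypothetical STORE table with ID and name columns:

SELECT STORE_ID, STORE_NAME
FROM STORE
ORDER BY STORE_NAME;

The RTF template can then iterate over the resulting group to build the store columns, with the row count playing the same role as G_1::STORE_COUNT.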
I have an Excel spreadsheet with multiple entries that I want to insert into an SQLite DB from UIPath. How do I do this?
You could do it one of two ways. For both methods, you will need to use the Excel Read Range activity to read the spreadsheet into a DataTable.
Scenario 1: You could read the table in a For Each Row loop, line by line, converting each row to a SQL statement and running it with an Execute Non Query activity. This is slow, and if you like O notation, it is an O(n) solution.
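A hedged sketch of the per-row statement that Execute Non Query would run, assuming a hypothetical Entries table whose columns match the spreadsheet, with the parameters filled from the current row:

INSERT INTO Entries (Name, Email, Phone)
VALUES (@Name, @Email, @Phone);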
Scenario 2: You could upload the entire table to the database in one go (as long as it's compatible with the DB table).
You will need the Database > Insert activity.
You will need to provide the DB connection (I explain how to create one in another post).
Then enter the SQLite database table you want to insert into, in quotes.
Then enter the DataTable that you created or pulled from another resource in the last field.
The output will be an integer (the number of affected records).
In O notation, this is an O(1) solution, at least from our coding perspective.
I have:
<cfspreadsheet action="read" src="#Trim(PathToExcelFile)#" query="Data">
How do I count the total number of columns in my "Data" query using a ColdFusion Query of Queries? I need to check whether my users have used the correct Excel file format before inserting into my DB.
I'm using Oracle 11g and I cannot do:
Select * From Data Where rownum < 2
If I could do that, then I could create an array and count the columns, but running that script results in an error saying there is no column named ROWNUM. Oracle also does not allow me to use SELECT TOP 1.
I don't want to loop over 5000+ records just to count the columns of one row. I appreciate any help, thank you.
ColdFusion adds a few additional variables to its query results. One of them is named columnList and contains a comma-separated list of the query columns that were returned (see the documentation for details). From that you can count the number of columns easily, for example #listLen(Data.columnList)#. (As an aside, ROWNUM fails because a Query of Queries runs against the in-memory query result, not against Oracle.)
I have created a report file in my ASP application, Statistics.rdlc
I have created a Data Source which connects to my local database.
I now wish to add a dataset using a specific query I have written. However, when I right-click Datasets in the Report Data panel and select my data source, I am presented with a list of the tables in the database under 'Available datasets'.
What I am expecting to see here is the Dataset1.xsd I created, which contains the following:
That query contains the SQL I wish to apply to my report table. Can anyone point out what I'm doing wrong here?
I needed to create a TableAdapter, not a query.
This post helped me out:
http://www.c-sharpcorner.com/UploadFile/a72401/rdlc-report-generation-using-dataset/
Sometimes the report viewer doesn't accept a query when you use "SELECT * FROM". List all the columns instead of *.
Ex: SELECT column1, column2, column3 FROM table
We get new data for our database from an online form that outputs an Excel sheet. To normalize the data for the database, I want to move data from multiple columns into one column, with a row per value.
For example, I want data like this:
ID | Home Phone | Cell Phone | Work Phone
 1 |   555-1234 |   555-3737 |   555-3837
To become this:
PhoneID | ID | Phone Number | Phone Type
      1 |  1 |     555-1234 | Home
      2 |  1 |     555-3737 | Cell
      3 |  1 |     555-3837 | Work
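For reference, the reshaping itself can be done with a union query; a minimal sketch in Access SQL, assuming hypothetical ImportedData and PhoneNumbers tables (with PhoneID as an AutoNumber):

INSERT INTO PhoneNumbers (ID, PhoneNumber, PhoneType)
SELECT ID, [Home Phone], 'Home' FROM ImportedData WHERE [Home Phone] IS NOT NULL
UNION ALL
SELECT ID, [Cell Phone], 'Cell' FROM ImportedData WHERE [Cell Phone] IS NOT NULL
UNION ALL
SELECT ID, [Work Phone], 'Work' FROM ImportedData WHERE [Work Phone] IS NOT NULL;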
To import the data, I have a button that finds the spreadsheet and then runs a bunch of queries to add the data.
How can I write a query to append this data to the end of an existing table without ending up with duplicate records? The data pulled from the website is all stored and archived in an Excel sheet that will be updated without removing the old data (we don't want to lose this extra backup), so with each import, I need it to disregard all of the previously entered data.
I was able to make a query that lists everything from the original spreadsheet in the correct format (I imported the external spreadsheet into an unnormalized table in Access to test it), but when I try to append it to the phone number table, it adds all of the data again on each import. I can remove the duplicates with another query, but I'd rather not leave it like that.
There are several possible approaches to this problem; which one you choose may depend on the size of the dataset relative to the number of updates being processed. Basically, the choices are:
1) Add a unique index to the destination table, so that Access will refuse to add a duplicate record. You'll need to handle the possible warning ("Access was unable to add xxx records due to index violations" or similar).
2) Import the incoming data to a staging table, then outer join the staging table to the destination table and append only records where the key field(s) in the destination table are null (i.e., there's no matching record in the destination table).
I have used both approaches in the past - I like the index approach for its simplicity, and I like the staging approach for its flexibility, because you can do a lot with the incoming data before you append it if you need to.
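A minimal sketch of the staging approach (2) in Access SQL, assuming hypothetical Staging and PhoneNumbers tables matched on ID and PhoneType:

INSERT INTO PhoneNumbers (ID, PhoneNumber, PhoneType)
SELECT s.ID, s.PhoneNumber, s.PhoneType
FROM Staging AS s LEFT JOIN PhoneNumbers AS p
  ON (s.ID = p.ID) AND (s.PhoneType = p.PhoneType)
WHERE p.ID IS NULL;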
You could run a delete query on the table where you store the imported data and then run your imports, assuming the data is only being updated.
The delete query removes all records, and the import then repopulates the table, so there are no duplicates.
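In SQL terms, assuming the same hypothetical PhoneNumbers table, the sketch is simply:

DELETE FROM PhoneNumbers;

followed by re-running the append query to repopulate the table.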
I developed an automation application for a car service. I have started on the accessories module, but I can't work out how to build the data model schema.
I've got the accessories data in a text file, line by line (not a CSV or similar, so I split the lines by substring). Every month, the factory sends the data file to the service. It includes the prices, the names, the codes, etc., and every month the prices are updated. I thought a bulk insert was a good choice for loading the data into SQL (and I did that), but it doesn't solve my problem: I don't want duplicate data just to get the new prices. I thought about inserting only the prices into another table and building a relation between Accessories and AccessoriesPrices, but sometimes new accessories are added to the list, so I have to check every line against the Accessories table. On top of that, I have to keep the quantity of the accessories, the invoices, etc.
By the way, they send 70,000 lines every month. So, can anyone help me? :)
Thanks.
70,000 lines is not a large file. You'll have to parse the file yourself and issue ordinary INSERT and UPDATE statements based on the data it contains. There's no need for bulk operations with data of this size.
The most common approach to something like this would be to write a simple SQL statement that accepts all of the parameters, then does something like this:
if (exists(select * from YourTable where <exists condition>))
    update YourTable set <new values> where <exists condition>
else
    insert into YourTable (<columns>) values (<values>)
(Alternatively, you could rewrite this statement to use the T-SQL MERGE statement; see the sketch at the end of this answer.)
Where...
<exists condition> represents whatever you would need to check to see if the item already exists
<new values> is the set of Column = value statements for the columns you want to update
<columns> is the set of columns to insert data into for new items
<values> is the set of values that corresponds to the previous list of columns
You would then loop over each line in your file, parsing the data into parameter values, then running the above SQL statement using those parameters.
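For reference, a hedged sketch of that MERGE alternative, assuming a hypothetical Accessories table keyed on Code, with the per-line values supplied as parameters:

MERGE Accessories AS target
USING (VALUES (@Code, @Name, @Price)) AS source (Code, Name, Price)
    ON target.Code = source.Code
WHEN MATCHED THEN
    UPDATE SET Name = source.Name, Price = source.Price
WHEN NOT MATCHED THEN
    INSERT (Code, Name, Price) VALUES (source.Code, source.Name, source.Price);

This folds the exists/update/insert logic into a single statement per line.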