I am creating a report and have a field that contains multiple values representing different data measures, e.g. 4 = Completeness, 5 = Accuracy, etc. What I need to do is create multiple columns where that field is filtered down to a single value. The problem is that when I try to edit the query item in the report, I get the error 'Boolean value expression as query item is not supported'. How do I fix this?
example:
ID column | Data Value = 4 | Actual Data | Data Value = 5
EDIT:
I currently have case when [Data Value] = 4 then [percentage] end (with the appropriate value for each column), but I am still getting the wrong output:
ID1 | 45% | | |
ID1 | | 35% | |
ID1 | | | 67% |
I need all of ID1 to be in one row.
You can fix this by totaling by ID, which will combine the three rows in your example into one:
total([Measure] for [ID])
Change each of the three percentage columns to use this expression, substituting its respective data item for [Measure].
Normally you don't want to total percentages, but this is an exception: since only one row per ID has actual data, the total will match that row, and the nulls in the other two rows will not contribute to it.
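Putting the two pieces together, each column's data item becomes one expression along these lines (the names [Data Value] and [percentage] come from your edit; adjust them to your model, and repeat with 5, 6, etc. for the other columns):
total(case when [Data Value] = 4 then [percentage] end for [ID])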
A simple way would be to build a separate query for each data value and join them on ID.
Basic requirements:
I have a table with a bunch of attributes (20-30), but only 3 are used in querying: User, Category, and Date. It would be structured something like this:
User | Category | Date | ...
1 | Red | 5/15
1 | Green | 5/15
1 | Red | 5/16
1 | Green | 5/16
2 | Red | 5/18
2 | Green | 5/18
I want to be able to query this table in the following 2 ways:
Most recent rows (based on Date) by User. e.g., User=1 returns the 2 rows from 5/16 (3rd and 4th row)
Most recent rows (based on Date) by User and Category. e.g., User=1, Category=Red returns the 5/16 row only (the 3rd row).
Is the best way to model this with a HASH key on User, a RANGE key on Date, and a GSI with a HASH key on User+Category and a RANGE key on Date? Is there anything that might be more efficient? If that's the path of least resistance, I'd still need to know how many rows to return, which would require a count against distinct categories or something.
I've decided that it's going to be easier to just change the way I'm storing the documents. I'll move Category and the other attributes into a sub-document so I can easily query against User+Date, and I'll do any User+Category+Date querying with some client-side code against the User+Date result set.
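The client-side part is cheap. A rough sketch with boto3 (the table name is illustrative, and Category is shown as a top-level attribute for brevity; with the sub-document layout you'd read it from the sub-document instead):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user_events")  # hypothetical table name

# HASH on User, RANGE on Date: fetch the user's rows newest-first.
resp = table.query(
    KeyConditionExpression=Key("User").eq(1),
    ScanIndexForward=False,  # descending by the Date range key
)
items = resp["Items"]

# Most recent rows by User: everything sharing the newest Date.
latest = [i for i in items if i["Date"] == items[0]["Date"]] if items else []

# Most recent row by User+Category: filter, then take the first hit
# (items are already sorted newest-first).
red = next((i for i in items if i.get("Category") == "Red"), None)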
I have two tables that are linked via a relation (Edit -> Data Table Properties -> Relations). One contains some raw data, and the other contains aggregated data (a calculation on the values).
You can see some examples below. Here, the data are linked on the "category" column.
RAW DATA
category | id | value
---------+----+------
A | 1 | 10
A | 2 | 20
A | 3 | 30
A | 4 | 30
B | 1 | 20
B | 2 | 20
COMPUTED DATA
category | any_calculation //aggregation of raw data based on category
---------+----------------
A | 10
B | 20
To do the calculation, I use an R/TERR data function that takes the raw data as input and outputs the computed data.
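The function body itself is essentially a per-category aggregation, something like this sketch (min is only a stand-in here; the real calculation is much heavier):
# TERR data function: input parameter 'raw', output parameter 'computed'
computed <- aggregate(value ~ category, data = raw, FUN = min)
names(computed)[2] <- "any_calculation"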
Then I display raw data in a scatter plot (one per category), and I add a curve that is taken from the column "any_calculation" of the computed data.
My main problem is that my table with computed data isn't filled by the R/TERR script. The cause is, in my opinion, the cyclic dependency between those two tables.
Do you have any idea/workaround/fix?
I should also add that I can't do the calculation in the scatter plot (it's a huge calculation). I'm using Spotfire 7.8.0.
It seems that a table can't be modified/edited by different sources; that is to say, multiple scripts (R and Python) can't have the same table as an output.
To fix my problem, I created a new table in one of my scripts, then created a relation between this table and the one produced by the other script.
I have a DataTable like the one below. It is not fixed; it can contain any number of columns. I need to compare the column values based on the column names and update the other row's value.
E.g
dtFinYearValues
dtColumnName | 2017AU | 2017CN | 2018AU | 2018CN | 2019CN | 2020CN
--------------------------------------------------------------------
Value | -1234 | -500 | -300 | 1000 | 1000 | -500
LatestValue | -1234 | -500 | -300 | 500 | 1000 | -500
LatestValue of 2018CN --> sum of the 2017CN Value (-500) and the 2018CN Value (1000).
For the above DataTable I need to compare the column names and update the values accordingly.
Conditions:
1) If the value is -ve, update LatestValue with the same value.
2) If the value is +ve, check whether a -ve value exists for a previous fin-year of the same country (in the above DataTable, the 2018CN value is +ve but the 2017CN value is negative, so the sum of 2017CN and 2018CN has to be written to the 2018CN LatestValue).
I can't hard-code the column numbers, as there can be different country/fin-year combinations; I need to compare the values of one country with the same country only.
How can I code this in VB.NET?
Can you bring the data back from the db another way? If so, look at using SQL UNPIVOT (the unpivot could also be done in your code using your original dataset, but you would have to code it manually). This will give you 3 columns (name, value, latestvalue), which will be easier to process. Updates can still be done using your original dataset.
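If you stay in code instead, a minimal VB.NET sketch of the manual approach could look like this (it assumes the first column holds the row labels, column names follow the <year><country> pattern from your example, and the columns arrive in year order):

Imports System.Data
Imports System.Collections.Generic

Module FinYearFix
    Sub UpdateLatestValues(dt As DataTable)
        Dim valueRow As DataRow = dt.Rows(0)   ' the "Value" row
        Dim latestRow As DataRow = dt.Rows(1)  ' the "LatestValue" row
        ' Most recent unconsumed negative Value per country.
        Dim carried As New Dictionary(Of String, Decimal)

        For Each col As DataColumn In dt.Columns
            If col.ColumnName = "dtColumnName" Then Continue For ' skip the label column
            Dim country As String = col.ColumnName.Substring(4)  ' "2018CN" -> "CN"
            Dim v As Decimal = Convert.ToDecimal(valueRow(col))
            If v < 0 Then
                latestRow(col) = v        ' condition 1: copy the negative as-is
                carried(country) = v      ' remember it for the next fin-year
            ElseIf carried.ContainsKey(country) Then
                latestRow(col) = v + carried(country) ' condition 2: offset by the carried negative
                carried.Remove(country)   ' the negative has been consumed
            Else
                latestRow(col) = v
            End If
        Next
    End Sub
End Module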
I have a database that is the output of a Python script for a basic game. The code saves to a table called points with the columns name, account_name, time, and score. What I want is for the data to be saved into a second table, aggregated by name; I will then do the same with account_name. Some of the points table:
name |account_name | time | score
oliver |Oliver | 10:29:14-01:04:2017 | 250
oliver |Oliver | 10:29:20-01:04:2017 | 500
dave |Oliver | 10:29:34-01:04:2017 | 250
What I want is for the data to be rolled up into a table called name, where the score is totalled for all records with the same name and a column keeps track of how many entries have been merged (in this case, equal to the number of games played). For example:
name | totalpoints | totalgames
oliver| 750 | 2
dave | 250 | 1
I will use the same format for account_name. I have found information on how to group and sum the data, but not into a second table. Thank you in advance.
First, create your table:
CREATE TABLE `stats` (
`name` TEXT PRIMARY KEY ON CONFLICT REPLACE,
`totalpoints` INTEGER,
`totalgames` INTEGER
);
then insert into your table with:
INSERT INTO stats
SELECT points.name, SUM(points.score) AS totalpoints, COUNT(*) AS totalgames
FROM points
GROUP BY points.name
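From the Python side this is just two statements against sqlite3 (the database file name below is an assumption):

import sqlite3

con = sqlite3.connect("game.db")  # hypothetical file name
con.executescript("""
    CREATE TABLE IF NOT EXISTS stats (
        name TEXT PRIMARY KEY ON CONFLICT REPLACE,
        totalpoints INTEGER,
        totalgames INTEGER
    );
    INSERT INTO stats
    SELECT name, SUM(score), COUNT(*)
    FROM points
    GROUP BY name;
""")
con.commit()
con.close()

Because of ON CONFLICT REPLACE, re-running the INSERT simply refreshes each name's totals.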
I want to merge two different csv files into one based on one column. Both csv datasets have a column with the same description (NAME). Now I want to copy the content of two columns (POINT_X and POINT_Y) from Table B to Table A, matched on the NAME column.
Every row of Table A with the name "TestTestTest" should get the corresponding values from the Table B row with the name "TestTestTest".
TableA
FID | NAME| job | school | superma | traffic | fun | shopping |
TableB
FID | NAME| pop | POINT_X | POINT_Y | POINT_Z
I've already tried to use the merge function.
newdata = merge(TableA, TableB, all="TRUE")
write.csv(newdata, file = "merge.csv")
This works somehow, but it writes a strange new .csv with many columns, which I don't want. I just want to add the columns POINT_X and POINT_Y to TableA, matched on the column NAME.
Thanks!
You could still use merge, but pass TableB limited to the columns NAME, POINT_X, and POINT_Y:
newdata = merge(TableA, TableB[,c("NAME", "POINT_X", "POINT_Y")], all=TRUE)
write.csv(newdata, file = "merge.csv")
merge is still the best way, and you can add the by parameter:
newdata = merge(TableA, TableB, by="NAME", all=TRUE)
However, it can also be achieved with match (note the argument order: look up TableA's names in TableB):
TableA$POINT_X <- TableB[match(TableA$NAME, TableB$NAME), "POINT_X"]
TableA$POINT_Y <- TableB[match(TableA$NAME, TableB$NAME), "POINT_Y"]