I have data that looks like this:
Sum Of a | Sum Of b | Sum Of c | Sum Of d
100      | 200      | 300      | 400
In order to create a pie chart, I need to change it to a format something like this:
Sum Of | Value
a      | 100
b      | 200
c      | 300
d      | 400
Question: how can I create such a new table from the first table, by query or otherwise? Any suggestions?
I was able to find this link, which I found useful, on creating charts from MS Access.
Here we see that the trick is to create a saved UNION query named [SomethingDataForPieChart]...
SELECT "DONE" AS PieCategory, [DONE] AS PieValue, [AREA] FROM [TABLE]
UNION ALL
SELECT "REMAIN" AS PieCategory, [REMAIN] AS PieValue, [AREA] FROM [TABLE]
...returning...
PieCategory | PieValue | AREA
----------- | -------- | -----
DONE | 100 | AREA1
DONE | 200 | AREA2
DONE | 200 | AREA3
REMAIN | 200 | AREA1
REMAIN | 300 | AREA2
REMAIN | 700 | AREA3
...and this is the shape from which you start building a pie chart.
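Applied to the table in the question, the same trick would look something like this (just a sketch; [SummaryTable] is a made-up name for the source table, and each column becomes one category row):
SELECT "a" AS SumOf, [Sum Of a] AS Value FROM [SummaryTable]
UNION ALL
SELECT "b" AS SumOf, [Sum Of b] AS Value FROM [SummaryTable]
UNION ALL
SELECT "c" AS SumOf, [Sum Of c] AS Value FROM [SummaryTable]
UNION ALL
SELECT "d" AS SumOf, [Sum Of d] AS Value FROM [SummaryTable]
Save that as a query and point the chart's row source at it.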
Please read How to add a pie chart to my Access report, as it has many images and step-by-step instructions.
Credit to Gord Thompson for this answer.
Related
I need to do some cumulative plots in R, but I really don't know what to use. I have data like the sample below.
I want to make some graphs like those shown in the images (linked below). The first should show, for example, that 80% of the stops happen when Q is at some value X. The second, starting from the exceedance value (1 mg/L), should show the accumulation of stops against concentration. The third should show the accumulation of stops over time.
+------------+-------+----------+----------------------+
| Date       | Stops | Q (m3/s) | Concentration (mg/L) |
+------------+-------+----------+----------------------+
| 1/01/2009  | no    | 100      | 0.5                  |
| 2/01/2009  | no    | 98       | ---                  |
| 3/01/2009  | no    | 80       | ---                  |
| 4/01/2009  | yes   | 65       | 1.2                  |
| 5/01/2009  | yes   | 60       | ---                  |
| 6/01/2009  | yes   | 67       | ---                  |
| 7/01/2009  | no    | 75       | 0.6                  |
| 8/01/2009  | no    | 70       | ---                  |
| 9/01/2009  | no    | 72       | 1.0                  |
| 10/01/2009 | yes   | 60       | 1.0                  |
| 11/01/2009 | yes   | 63       | ---                  |
+------------+-------+----------+----------------------+
[%stops and discharge][1] [cumulative stops with concentration][2] [cumulative stops over time][3]
The data I'm using is bigger, of course; it covers 10 years.
After making the plots, I would also like to find the proportion of time during which stops happened with low discharge, or with exceeded concentrations. For example, in the 10-year period, stops account for 10 months.
I'm also looking at the relation of the stops to the other variables, but I'm not sure which test is best for that. I'm planning to use Pearson for the relation of discharge with concentration, although I'm not sure if the discontinuous concentration data is a problem. For the relation of stops with concentration and discharge, I'm planning Spearman rank, but again, I'm not sure if that is alright with a categorical variable (stops) and the discontinuous data (concentration). What do you think is the best option for relating these variables?
[1]: https://i.stack.imgur.com/hYdkD.png
[2]: https://i.stack.imgur.com/N0qNW.png
[3]: https://i.stack.imgur.com/0nSrF.png
Thank you for your help!
Maybe this belongs on math.stackexchange, but I am afraid that I will get a formula as an answer that I won't understand.
I have products in our database, and I have products from different suppliers in another table.
What I want is to pair these suppliers' products to our products where possible, or at least show me a list where the match is likely.
I iterated through all the suppliers' products, split each product name on spaces, and stored every word in a table along with its occurrence count.
The table looks like this:
+--------+-------------+---------------+-------+
| id     | word        | originalWord  | count |
+--------+-------------+---------------+-------+
| 220950 | Tracer      | Tracer        | 493   |
| 220951 | Destroyer   | Destroyer     | 3     |
| 220952 | Avago5050   | Avago5050     | 4     |
| 220953 | mouse       | mouse         | 2535  |
| 220954 | TRAMYS44916 | /TRAMYS44916/ | 2     |
| 220955 | GameZone    | GameZone      | 16    |
| 220956 | Enduro      | Enduro        | 3     |
| 220957 | AVAGO       | AVAGO         | 10    |
| 220958 | 5050        | 5050          | 4     |
| 220959 | optical     | optical       | 2370  |
| 220960 | USB         | USB           | 6160  |
+--------+-------------+---------------+-------+
and so on. Of course, in another table I stored the product id for each word.
So what I want is to determine the weight of a word by its occurrence count.
As you can see, the word TRAMYS44916 occurs only twice; it is almost certainly a part number, so it is the heaviest word. Its weight should be 1.
The most frequent word is USB with 6160 occurrences, so its weight should be something like 0.01, I think.
What is the best way to get the weights of all the words?
There are other tables for other suppliers, so the distribution always changes.
This reminds me of Naive Bayes text classification; to determine which product a name belongs to, you can calculate the tf-idf of all the words.
Then, if you want to pair another product name, you can decompose it into words again and select the product id with the highest term value. You should probably set some threshold, though, because in some cases the match will not be that clear.
tf-idf = ("number of word matches in product name"/"word count of product name") * log ("number of products" / "number of products that contains the word")
You can see how it is done in the example here (in your case, the "document" will be the full product name): https://en.wikipedia.org/wiki/Tf–idf#Example_of_tf.E2.80.93idf
Example implementation in Java: https://guendouz.wordpress.com/2015/02/17/implementation-of-tf-idf-in-java/
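If all you need are the per-word weights from the counts table above, here is a rough SQL sketch of the idf part (supplier_words and supplier_products are made-up names here, LOG is the natural logarithm, and dialect details vary):
SELECT w.word,
       LOG(t.total / w.count) AS weight
FROM supplier_words w
CROSS JOIN (SELECT COUNT(*) AS total FROM supplier_products) t
ORDER BY weight DESC
This gives rare words like TRAMYS44916 the largest weight and common ones like USB the smallest; dividing by LOG(t.total) would normalize the weights to roughly the 0..1 range you described.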
I'm tasked with designing a survey system for our customer.
It's based on ASP.NET, and the database used is Oracle.
I have no experience here, so I'd like to ask for advice about:
What database schema to use for storing user answers; I'm afraid my current design is likely to have performance issues...
About the survey:
There'll be two or more surveys going on at the same time.
Surveys may be triggered once a year or more frequently, so I think I need a Survey Period table.
Surveys target different products, so there'll be a mapping between products and surveys.
My current design:
Survey Category table
+------------+--------------+
| CategoryId | CategoryName |
+------------+--------------+
| 1          | cat1         |
| 2          | cat2         |
+------------+--------------+
Survey Category version table
+-----------+------------+--------------------+
| VersionId | CategoryId | VersionDescription |
+-----------+------------+--------------------+
| 1         | 1          | 'cat1 version1'    |
| 2         | 1          | 'cat1 version2'    |
| 3         | 2          | 'cat2 version1'    |
+-----------+------------+--------------------+
Survey Period table
+----------+-------------------+
| PeriodId | PeriodDescription |
+----------+-------------------+
| 1        | 'cat1 period2016' |
| 2        | 'cat1 period2017' |
| 3        | 'cat2 period2016' |
+----------+-------------------+
Survey Period-Version map table
+----------+-----------+
| PeriodId | VersionId |
+----------+-----------+
| 1        | 1         |
| 1        | 2         |
| 2        | 1         |
| 3        | 3         |
+----------+-----------+
A Version-Question map table
+-----------+------------+
| VersionId | QuestionId |
+-----------+------------+
| 1         | 1          |
| 1         | 2          |
| 1         | 3          |
| 2         | 1          |
| 2         | 2          |
| 3         | 1          |
+-----------+------------+
A Version-Product map table
+-----------+-----------+
| VersionId | ProductId |
+-----------+-----------+
| 1         | 'prodA'   |
| 1         | 'prodB'   |
| 1         | 'prodC'   |
| 2         | 'prodA'   |
+-----------+-----------+
And to store the survey result data, I have to put lots of duplicated information across rows:
User Answer table
+----------+------------+----------+-----------+-----------+--------+-----------+
| AnswerId | QuestionId | PeriodId | UserId/Ip | ProductId | Answer | VersionId |
+----------+------------+----------+-----------+-----------+--------+-----------+
| 1        | 1          | 1        | 'adam'    | 'prodA'   | 'Yes'  | 2         |
| 2        | 2          | 1        | 'Joe'     | 'prodA'   | 'Yes'  | 2         |
| 3        | 1          | 2        | 'adam'    | 'prodB'   | 'A'    | 3         |
+----------+------------+----------+-----------+-----------+--------+-----------+
We're expecting tens of products and thousands of users for this system.
So assume 30 products, 5000 users, 50 questions per survey, and 4 surveys per year:
with the current design, there'll be 5000 * 4 * 50 * 30 = 30 million records added to the User Answer table per year.
I'm really afraid it won't keep working properly..., so any suggestions for optimizing?
Edit 1:
Added a VersionId column to the User Answer table, as suggested.
This looks like a case of premature optimization. You should probably worry more about correctness and flexibility than performance.
30 million rows per year, especially in such skinny tables, is a small amount of data for any Oracle system. Don't worry too much about indexes and partitioning yet; those can be added later if necessary.
Your solution is similar to the Entity Attribute Value (EAV) model. It's worth knowing that term, since much has been written about it. There are two common problems with EAV models that you want to avoid:
Avoid extremes. Don't use EAV for everything, but don't avoid it completely either. EAV is slow and inconvenient compared to a normal table structure, so it should not be used for every interesting column; otherwise you have created a database within a database. For example, if virtually every survey has fields like a username and a creation date, store those as regular columns and not in a generic column. It's OK to have a column that is only populated 99% of the time. On the other hand, it's a bad idea to avoid EAV entirely and try to hack something together with 1,000-column tables or object-relational types.
Always use the correct type. Always, always, always store data as the correct type. Store numbers as numbers, dates as dates, and strings as strings. Your queries will be easier, faster, and safer if you have at least three columns for the data: ANSWER_NUMBER, ANSWER_STRING, ANSWER_DATE, as sketched below. I explain the type safety problem more in this answer. Those extra columns may look bad in the model diagram, but they are a life-saver when you're querying the data.
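For what it's worth, a minimal sketch of such an answer table in Oracle (all names here are illustrative, not a prescription); each row would populate exactly one of the three typed answer columns:
CREATE TABLE user_answer (
    answer_id     NUMBER PRIMARY KEY,
    question_id   NUMBER NOT NULL,
    period_id     NUMBER NOT NULL,
    version_id    NUMBER NOT NULL,
    user_id       VARCHAR2(30),
    product_id    VARCHAR2(30),
    answer_number NUMBER,          -- numeric answers stay comparable and aggregable
    answer_string VARCHAR2(4000),  -- free-text answers
    answer_date   DATE             -- date answers stay sortable without conversion
);
Queries can then filter or aggregate ANSWER_NUMBER and ANSWER_DATE natively instead of converting strings on the fly.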
I have two tables
Names
id | name
---------
5 | bill
15 | bob
10 | nancy
Entries
id | name_id | added | description
----------------------------------
2 | 5 | 20140908 | i added this
4 | 5 | 20140910 | added later on
9 | 10 | 20140908 | i also added this
1 | 15 | 20140805 | added early on
6 | 5 | 20141015 | late to the party
For each name, I'd like to pick the entry with the numerically lowest added value in the Entries table, then display the combined rows from both tables ordered by that added column overall, so the results will be something like:
names.id | names.name | entries.added | entries.description
-----------------------------------------------------------
15       | bob        | 20140805      | added early on
5        | bill       | 20140908      | i added this
10       | nancy      | 20140908      | i also added this
I looked into joins on the first item (e.g. SQL Server: How to Join to first row) but wasn't able to get it to work.
Any tips?
Give this query a try:
SELECT Names.id, Names.name, Entries.added, Entries.description
FROM Names
INNER JOIN Entries
ON Names.id = Entries.name_id
ORDER BY Entries.added
Add DESC if you want it in reverse order, i.e. ORDER BY Entries.added DESC.
This should do it:
SELECT n.id, n.name, e.added, e.description
FROM Names n
INNER JOIN Entries e ON n.id = e.name_id
INNER JOIN (SELECT name_id, MIN(added) AS first_added
            FROM Entries
            GROUP BY name_id) f
    ON e.name_id = f.name_id AND e.added = f.first_added
ORDER BY e.added
The subquery finds each name's earliest added value, and joining it back to Entries picks up the matching description row.
I am wondering if there is a simple way to achieve this in Julia besides iterating over the rows in a for-loop.
I have a table with two columns that looks like this:
| Name | Interest |
|------|----------|
| AJ | Football |
| CJ | Running |
| AJ | Running |
| CC | Baseball |
| CC | Football |
| KD | Cricket |
...
I'd like to create a table where each Name in the first column is matched with a combined Interest column, as follows:
| Name | Interest |
|------|----------------------|
| AJ | Football, Running |
| CJ | Running |
| CC | Baseball, Football |
| KD | Cricket |
...
How do I achieve this?
UPDATE: OK, so after trying a few things including print_joint and grpby, I realized that the easiest way to do this would be the by() function. I'm 99% there.
by(myTable, :Name, df->DataFrame(Interest = string(df[:Interest])))
This gives me my :Interest column as "UTF8String[\"Running\"]", and I can't figure out which method I should use instead of string() (or where to typecast) to get the desired ASCIIString output.
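For what it's worth, here is a minimal sketch that uses join() instead of string(), with the by()-based API from the update (the sample data is rebuilt from the table above):
using DataFrames

myTable = DataFrame(Name = ["AJ", "CJ", "AJ", "CC", "CC", "KD"],
                    Interest = ["Football", "Running", "Running", "Baseball", "Football", "Cricket"])

# join() concatenates the group's strings with a separator,
# instead of string()-ing the whole column vector
by(myTable, :Name, df -> DataFrame(Interest = join(df[:Interest], ", ")))
join(df[:Interest], ", ") builds one plain string from the group's entries, which avoids the UTF8String["..."] array printing you are seeing.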