Finding the number of occurrences of a distinct value in a column - postgresql-9.1

I am exploring a new table in SQL and was wondering what the best way is to find the count of occurrences of each value. In essence, I would like to better understand the distribution of values in the column.
At first I did a SELECT of the top 10,000 rows of the table, and for the particular column I am interested in I got only 2-3 distinct values. Let's call them A, B, C.
But when I do a SELECT DISTINCT on that column I get 5 million separate values.
What I want to know is the distribution of the values in the column.
So an example of the output I am looking for from the query would be:
Distinct Value of Column    Count of Occurrence
A                           A lot
B                           A lot
C                           A lot
D                           1
E                           1
F                           1
G                           1

What you're looking for is GROUP BY.
Example:
SELECT category, COUNT(*) FROM CATALOGS GROUP BY category
This will give you the number of elements per category.
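Applied to your case, a minimal sketch (my_table and my_column are placeholder names for your table and the column you are exploring) that lists each distinct value together with how often it occurs, most frequent first:
SELECT my_column, COUNT(*) AS occurrences
FROM my_table
GROUP BY my_column
ORDER BY occurrences DESC;
Adding LIMIT 10 at the end would show only the ten most common values, which is a quick way to eyeball the distribution.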

Related

How do we formulate multiple conditions in a single column in Tableau? Case When or IF Else?

I have one question. My file contains 3 columns:
Column A - Name,
Column B - Transaction count,
Column C - Code.
However, I only need to count 2 codes out of these multiple codes.
I tried using this formula, but I am getting incorrect results:
COUNT(IF ISNULL([name])
AND [Code] <> 'ABA'
AND [Code] <> 'ABC'
THEN [Transaction Count]
END)
How do we formulate multiple conditions in a single column?
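Reading the question as stated, one likely issue is that the formula above excludes codes 'ABA' and 'ABC', while the stated goal is to count only those two codes. A minimal sketch of the corrected calculated field (standard Tableau IF/THEN/END syntax; adjust the field names to your workbook):
COUNT(IF [Code] = 'ABA' OR [Code] = 'ABC'
THEN [Transaction Count]
END)
If the ISNULL([name]) condition is still required, combine it with AND around the parenthesized OR clause.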

Is there an R function where I can get the names within a specific column in my dataset

Edit: using the aid from one of the users, I was able to use "table(ArrestData$CHARGE)", yet, since there are over 2400 entries, many of the entries are being omitted. I am looking for the top 5 charges, is there code for this? Additionally, I am looking at a particular council district (which is another variable titled "CITY_COUNCIL_DIST"). I want to see which are the top 5 charges given out within a specific council district. Is there code for this?
Thanks for the help!
Original post follows
Just like how I can use "names(MyData)" to see the names of my variables, I am wondering if I can use code to see the names/responses/data points of a specific column.
In other words, I am attempting to see the names in the rows of a specific column of data. I would like to see which names are cumulatively being used.
After I find this, I would like to know how many times each name within the rows is being used, whether that's as a count or a percentage. After that, I would like to see how many times each name is used, with the condition that it meets a given value of another column/variable.
Apologies if this is in any way confusing.
To go further in depth, I am playing around with the Los Angeles Police Data that I got via the Office of the Mayor's website. From 2017-2018, I am attempting to see what charges and the amount of each specific charge were given out in Council District 5. CHARGE and CITY_COUNCIL_DIST are the two variables I am looking at.
Any and all help will be appreciated.
To get all the distinct values, you can use the unique function, as in:
> x <- c(1,1,2,3,3,4,5,5,5,6)
> unique(x)
[1] 1 2 3 4 5 6
To count the occurrences of each distinct value you can use table, as in:
> x <- c(1,1,2,3,3,4,5,5,5,6)
> table(x)
x
1 2 3 4 5 6
2 1 2 1 3 1
The first row gives you the distinct values and the second row the counts for each of them.
EDIT
This edit addresses your second question, continuing with my previous example.
To look for the top five most repeated values of a variable we can use base R. To do so, I would first create a dataframe from your table of frequencies:
df <- as.data.frame(table(x))
Having this, you just have to order the column Freq in descending order and keep the first five rows:
head(df[order(-df$Freq), ], 5)
In order to look for the top five most repeated values of a variable within a group, however, we need to go beyond base R. I would use dplyr to create an augmented dataframe with frequencies for each value of the variable of interest, let it be count_variable:
library(dplyr)
x_or <- x %>%
  group_by(group_variable, count_variable) %>%
  summarise(freq = n())
where x is now your original dataframe (not the vector from the previous example), group_variable is the variable defining your groups and count_variable is the variable whose values you want to count. Now you just have to order the object so that the most frequent values of count_variable come first within each group, and keep five rows per group:
x_or %>%
  arrange(group_variable, desc(freq)) %>%
  slice_head(n = 5)
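For the concrete case in your edit, a minimal sketch (assuming the dataframe and columns are named ArrestData, CHARGE and CITY_COUNCIL_DIST as described, and that district 5 is stored as the number 5):
library(dplyr)
ArrestData %>%
  filter(CITY_COUNCIL_DIST == 5) %>%
  count(CHARGE, sort = TRUE) %>%
  head(5)
Here count(CHARGE, sort = TRUE) is dplyr shorthand for the group_by/summarise/arrange steps above, and head(5) keeps the five most frequent charges in that district.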

How to count the number of occurrences of values in all columns

I am able to find the number of occurrences of values in a single column by using:
select column_name, count(column_name)
from table_name
group by column_name
order by column_name
But I want a query for the number of occurrences of values across multiple columns.
The count function, when used directly on a column, just returns a count of the (non-null) rows; the sum of such counts over multiple columns is just the number of rows times the number of columns. One thing we can do instead is return, for each row, the sum of DECODEs of the condition over all columns, e.g.:
select mytable.*,
       DECODE(mytable.column1, 'target value', 1, 0)
     + DECODE(mytable.column2, 'target value', 1, 0) as hits
from mytable
Basically what that does is, for each row, count the number of columns that meet the condition. In this case that value (hits) can be 0, 1 or 2, because we are checking the condition over 2 columns.
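To turn the per-row hits into one total for the whole table, the same expression can simply be summed; a minimal sketch, using the same hypothetical mytable and 'target value' as above:
select SUM(DECODE(mytable.column1, 'target value', 1, 0)
         + DECODE(mytable.column2, 'target value', 1, 0)) as total_hits
from mytable
DECODE is Oracle-specific; in other databases the portable equivalent is a CASE expression such as CASE WHEN column1 = 'target value' THEN 1 ELSE 0 END.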

Count of columns with filters

I have a dataframe with multiple columns and I want to apply different functions to each column.
I want to calculate the count of column pq110a for each country mentioned in the qcountry2 column (me - Mexico, br - Brazil, ar - Argentina). The problem I face here is that I have to filter on these columns; for example, for the sample patients I want:
Count of pq110 when the values are 1 and 2 (for some patients)
Count of pq110 when the value is 3 (for other patients)
Similarly, when the value is 6.
For the total patients I want the total count of pq110.
I am expecting this output for each country.
Please suggest how I can do this for the other columns as well, country-wise.
Thanks!!
I guess what you want to do is count the number of rows of pq110 which have the same value within each different qcountry2.
So I'll use tapply to divide the data into several subsets and then use table to count the rows for each distinct value:
tapply(my_data[, "pq110"], INDEX = as.factor(my_data[, "qcountry2"]), function(x) table(x))
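To also get the specific filtered counts from the question (values 1 and 2, value 3, value 6, and the total), the same tapply pattern can return several counts per country; a minimal sketch, assuming the my_data, pq110 and qcountry2 names used above:
tapply(my_data[, "pq110"], as.factor(my_data[, "qcountry2"]), function(x) {
  c(count_1_2 = sum(x %in% c(1, 2)),
    count_3 = sum(x == 3, na.rm = TRUE),
    count_6 = sum(x == 6, na.rm = TRUE),
    total = length(x))
})
Each element of the result holds the four counts for one country.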

trouble with contingency table in R

I looked everywhere but did not find an answer to my question. I am having trouble making a contingency table. I have data with many columns, let's say 1, 2 and 3. In the first column there are, let's say, 100 different values, in the second 20, and the third column has 2 possible values: 0 and 1. First I take just the data with value 1 in column 3 (data <- data[Column3 == 1, ]). Now I have only around 20 different values in the 1st column and 5 in the 2nd column. However, when I make a contingency table its size is 100x20, not 20x5, and it contains a lot of zeros (they correspond to combinations of column1 and column2 which have value 0 in column3). I would be grateful for any kind of help, thanks.
I guess all three of your variables are factors, and subsetting a factor does not drop its unused levels, which is why the empty combinations still show up. So convert them to character using
as.character()
on all three variables, then apply
table()
to the converted columns.
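A minimal sketch of that suggestion, assuming the columns are named Column1 to Column3 as in the question:
sub_data <- data[data$Column3 == 1, ]
table(as.character(sub_data$Column1), as.character(sub_data$Column2))
An alternative with the same effect (a different technique from the one this answer proposes) is droplevels(sub_data), which removes the unused factor levels after subsetting, so table(sub_data$Column1, sub_data$Column2) then gives the expected 20x5 table.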
