I have this problem: I need to calculate which table a number belongs to.
For example, I need to determine to which table the number 18 belongs.
+---+---+ +---+---+ +---+---+ +---+---+ +---+---+
| 1 | 2 | | 5 | 6 | | 9 |10 | |13 |14 | |17 |18 |
+---+---+ +---+---+ +---+---+ +---+---+ +---+---+
| 3 | 4 | | 7 | 8 | | 11| 12| |15 |16 | |19 |20 |
+---+---+ +---+---+ +---+---+ +---+---+ +---+---+
In the example above, the number 18 belongs to the 5th table. How can I calculate which table any number belongs to, knowing that each table can contain only 4 numbers?
Divide by the size of each table and round up:
table_no = ceil(number / 4)
Beware of integer division in some languages. The same result can be obtained without converting to float, using integer division:
table_no = (number - 1) / 4 + 1
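A quick sketch of both formulas in Python, assuming `number` is 1-based and `size` is the capacity of each table:

```python
import math

def table_no_float(number, size=4):
    # Divide by the table size and round up.
    return math.ceil(number / size)

def table_no_int(number, size=4):
    # Same result using only integer division, no float conversion.
    return (number - 1) // size + 1

print(table_no_float(18), table_no_int(18))  # both print 5
```

The integer version also handles the boundary correctly: 16 lands in table 4, 17 in table 5.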
I have the following data:
+---------+--------+----------+------+-------+--------+-----------+
| xType | xAccID | xAccName | xCat | xYear | xMonth | xRaseed |
+---------+--------+----------+------+-------+--------+-----------+
| Amounts | 52 | Acc1 | Rs | 2020 | 11 | 3144.83 |
| Amounts | 52 | Acc1 | Rs | 2020 | 12 | -15199.64 |
| Amounts | 53 | Acc2 | Cus | 2020 | 12 | 5306.04 |
| Amounts | 53 | Acc2 | Cus | 2020 | 11 | 1090.64 |
+---------+--------+----------+------+-------+--------+-----------+
Actually, I want to sum the xRaseed in the current row with the xRaseed in the previous row, for each xAccID separately.
the result that I want:
+---------+--------+----------+------+-------+--------+--------------------------------+
| xType | xAccID | xAccName | xCat | xYear | xMonth | xRaseed |
+---------+--------+----------+------+-------+--------+--------------------------------+
| Amounts | 52 | Acc1 | Rs | 2020 | 11 | 3144.83 |
| Amounts | 52 | Acc1 | Rs | 2020 | 12 | Not -15199.64 But (-12,054.81) |
| Amounts | 53 | Acc2 | Cus | 2020 | 12 | 5306.04 |
| Amounts | 53 | Acc2 | Cus | 2020 | 11 | Not 1090.64 But (6,396.68) |
+---------+--------+----------+------+-------+--------+--------------------------------+
I applied the following solution that I got from somebody here:
select t.*,
sum(xRaseed) over (partition by xAccID order by xYear, xMonth) as running_xRaseed
from t;
Everything worked on my local server, but when I applied the solution on my hosting it didn't work. Locally I use XAMPP (10.4.17-MariaDB), and on my hosting I use MySQL 5.7.23-23. What's the problem, please?
Here is a db<>fiddle
On versions of MySQL earlier than 8.0 (which lack window functions), we can use a correlated subquery to find the rolling sum:
SELECT xType, xAccID, xAccName, xCat, xYear, xMonth,
(SELECT SUM(t2.xRaseed) FROM yourTable t2
WHERE t2.xAccID = t1.xAccID AND
(t2.xYear < t1.xYear OR
t2.xYear = t1.xYear AND t2.xMonth <= t1.xMonth)) AS xRaseed
FROM yourTable t1
ORDER BY
xAccId,
xYear,
xMonth;
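The correlated subquery uses only standard SQL, so it can be sketched against any engine. Here is a minimal demonstration using Python's built-in sqlite3 as a stand-in database, with the sample data from the question:

```python
import sqlite3

# In-memory stand-in for the real database; table and column names
# follow the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourTable (xAccID INT, xYear INT, xMonth INT, xRaseed REAL)")
con.executemany("INSERT INTO yourTable VALUES (?, ?, ?, ?)", [
    (52, 2020, 11, 3144.83),
    (52, 2020, 12, -15199.64),
    (53, 2020, 12, 5306.04),
    (53, 2020, 11, 1090.64),
])

# For each row, sum xRaseed over all rows of the same account whose
# (year, month) is less than or equal to the current row's.
rows = con.execute("""
    SELECT xAccID, xYear, xMonth,
           (SELECT SUM(t2.xRaseed) FROM yourTable t2
             WHERE t2.xAccID = t1.xAccID AND
                   (t2.xYear < t1.xYear OR
                    t2.xYear = t1.xYear AND t2.xMonth <= t1.xMonth)) AS running
    FROM yourTable t1
    ORDER BY xAccID, xYear, xMonth
""").fetchall()
for r in rows:
    print(r)
```

Account 52's December row comes out as -12054.81 and account 53's December row as 6396.68, matching the desired result.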
I'm using SQLite Browser, and I'm trying to find a query that can update my table. The table is called main:
| |time_one |time_two|
| 1| 2016-08-21 07:01:04| |
| 2| 2016-08-21 08:01:03| |
| 3| 2016-08-17 09:11:54| |
| 4| 2016-08-18 11:01:59| |
| 5| 2016-08-19 12:01:04| |
| 6| 2016-08-20 01:01:04| |
The result I'm looking for is the minutes, seconds, and milliseconds:
| |time_one |time_two|
| 1| 2016-08-21 07:01:04| 01:04|
| 2| 2016-08-21 08:01:03| 01:03|
| 3| 2016-08-17 09:11:54| 11:54|
| 4| 2016-08-18 11:01:59| 01:59|
| 5| 2016-08-19 12:01:04| 01:04|
| 6| 2016-08-20 01:01:04| 01:04|
This is the query I used to display that part of the time:
SELECT strftime('%M:%f', time_one)
FROM main;
Now I just want to update time_two with what is shown by the above command, so I can group the rows with the same minutes and seconds together.
Just put that computation into the UPDATE statement:
UPDATE main SET time_two = strftime('%M:%f', time_one);
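A minimal sketch of that UPDATE, run against an in-memory SQLite database with one sample row (column names follow the question's table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main (id INTEGER PRIMARY KEY, time_one TEXT, time_two TEXT)")
con.execute("INSERT INTO main (time_one) VALUES ('2016-08-21 07:01:04')")

# Copy the minute:seconds-with-fraction part of time_one into time_two.
con.execute("UPDATE main SET time_two = strftime('%M:%f', time_one)")

value = con.execute("SELECT time_two FROM main").fetchone()[0]
print(value)  # '01:04.000'
```

Note that `%f` is seconds with a fractional part (SS.SSS), so the stored value includes `.000` when the source timestamps have no sub-second precision; that still groups identical minute/second values together.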
I have a table defined in org-mode:
#+RESULTS[4fc5d440d2954e8355d32d8004cab567f9918a64]: table
| 7.4159 | 3.0522 | 5.9452 |
| -1.0548 | 12.574 | -6.5001 |
| 7.4159 | 3.0522 | 5.9452 |
| 5.1884 | 4.9813 | 4.9813 |
and I want to produce the following table:
#+caption: Caption of my table
| | group 1 | group 2 | group 3 |
|--------+---------+---------+---------|
| plan 1 | 7.416 | 3.052 | 5.945 |
| plan 2 | -1.055 | 12.574 | -6.5 |
| plan 3 | 7.416 | 3.052 | 5.945 |
| plan 4 | 5.1884 | 4.9813 | 4.9813 |
How can I accomplish that? Here is what I tried (in R):
#+begin_src R :colnames yes :var table=table :session
data.frame(table)
#+end_src
But of course it doesn't work; here is what I get:
#+RESULTS:
| X7.4159 | X3.0522 | X5.9452 |
|---------+---------+---------|
| -1.0548 | 12.574 | -6.5001 |
| 7.4159 | 3.0522 | 5.9452 |
| 5.1884 | 4.9813 | 4.9813 |
Any suggestions?
thanks!
This gets pretty close. First, define this function:
#+BEGIN_SRC emacs-lisp
(defun add-caption (caption)
(concat (format "org\n#+caption: %s" caption)))
#+END_SRC
Next, use this kind of src block. I use Python, but it should work in R too; you just need the :wrap. I passed your data in through the :var header; you don't need that if you generate the data in the block.
#+BEGIN_SRC python :results value :var data=data :wrap (add-caption "Some really long, uninteresting, caption about data that is in this table.")
data.insert(0, ["", "group 1", "group 2", "group 3"])
data.insert(1, None)
return data
#+END_SRC
This outputs
#+BEGIN_org
#+caption: Some really long, uninteresting, caption about data that is in this table.
| | group 1 | group 2 | group 3 |
|--------+---------+---------+---------|
| plan 1 | 7.416 | 3.052 | 5.945 |
| plan 2 | -1.055 | 12.574 | -6.5 |
| plan 3 | 7.416 | 3.052 | 5.945 |
| plan 4 | 5.1884 | 4.9813 | 4.9813 |
#+END_org
and it exports OK too, I think.
I have the query:
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
--TBELENCO.RAGIONE_SOCIALE,
--ROW_NUMBER() OVER (ORDER BY TBELENCO.DATA_PROC DESC) R
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
--,C.RAGIONE_SOCIALE
--ROW_NUMBER() OVER (ORDER BY LOG.DATA_PROC DESC) R
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
-- LEFT JOIN(
-- SELECT CUST.RAGIONE_SOCIALE,STR.POD,RSC.DATA_DA, RSC.DATA_A
-- FROM
-- MS038_METERS STR JOIN MS036_REL_SITES_CUSTOMERS RSC ON
-- STR.ID_SITO=RSC.ID_SITO
-- JOIN MS030_CUSTOMERS CUST ON
-- CUST.ID_CLIENTE=RSC.ID_CLIENTE
-- ) C ON
-- C.POD=LOG.POD
--AND LOG.DATA_PROC BETWEEN C.DATA_DA AND C.DATA_A
WHERE
1=1
--AND LOG.DATA_PROC>=TRUNC(SYSDATE)
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3
--TO_DATE('01/11/2014', 'DD/MM/YYYY')
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
If I execute the query with AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3, Oracle uses the index on the DATA_PROC field of the MS042_LOADING_LOGS (LOG) table; if instead I use AND LOG.DATA_PROC>=TRUNC(SYSDATE)-4 (or -5, -6, etc.), it does a full table scan. Why this behavior?
I also executed:
ALTER INDEX MS042_DATA_PROC_IDX REBUILD;
but with no change.
Thanks,
Igor
--***********************************************************
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
WHERE
1=1
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-1
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
Plan hash value: 2191058229
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 30866 (2)| 00:06:11 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | VIEW | | 94236 | 1196K| 30866 (2)| 00:06:11 |
| 3 | COUNT | | | | | |
|* 4 | HASH JOIN | | 94236 | 1104K| 30866 (2)| 00:06:11 |
| 5 | INDEX FULL SCAN | P087_TIPI_MISURE_PK | 15 | 30 | 1 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| MS042_LOADING_LOGS | 94236 | 920K| 30864 (2)| 00:06:11 |
|* 7 | INDEX RANGE SCAN | MS042_DATA_PROC_IDX | 94236 | | 25742 (2)| 00:05:09 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
7 - access(SYS_OP_DESCEND("DATA_PROC")<=SYS_OP_DESCEND(TRUNC(SYSDATE#!)-1))
filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("DATA_PROC"))>=TRUNC(SYSDATE#!)-1)
Plan hash value: 69930686
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 95921 (1)| 00:19:12 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | VIEW | | 1467K| 18M| 95921 (1)| 00:19:12 |
| 3 | COUNT | | | | | |
|* 4 | HASH JOIN | | 1467K| 16M| 95921 (1)| 00:19:12 |
| 5 | INDEX FULL SCAN | P087_TIPI_MISURE_PK | 15 | 30 | 1 (0)| 00:00:01 |
|* 6 | TABLE ACCESS FULL| MS042_LOADING_LOGS | 1467K| 13M| 95912 (1)| 00:19:11 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
6 - filter("LOG"."DATA_PROC">=TRUNC(SYSDATE#!)-4)
The larger the fraction of rows a query returns, the more efficient a full table scan becomes and the less efficient index access is. Apparently, Oracle estimates that the inflection point comes when the query returns more than three days of data. If that estimate is wrong, I would suspect that the statistics on your table or indexes are stale or inaccurate.
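The trade-off can be sketched numerically with the cost figures from the two plans above. This is a toy model, not Oracle's actual costing formula: assume index access costs roughly a fixed amount per estimated matching row, while a full scan has a fixed total cost, and the optimizer picks whichever is cheaper:

```python
# Toy model of the optimizer's plan choice. FULL_SCAN_COST is taken from
# the second plan above; COST_PER_INDEXED_ROW is an assumed illustrative
# average, not an Oracle internal value.
FULL_SCAN_COST = 95912
COST_PER_INDEXED_ROW = 0.33

def cheaper_plan(estimated_rows):
    # Index cost grows with the estimated row count; the full scan does not.
    index_cost = estimated_rows * COST_PER_INDEXED_ROW
    return "index range scan" if index_cost < FULL_SCAN_COST else "full table scan"

print(cheaper_plan(94_236))     # ~1 day of data: index wins
print(cheaper_plan(1_467_000))  # ~whole table: full scan wins
```

The key input is the estimated row count (94,236 vs. 1,467K in the two plans), which comes from the table statistics; if those estimates are far from reality, the crossover point will be wrong, which is why refreshing statistics matters more than rebuilding the index.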
Just starting out with R and trying to figure out what works for my needs when it comes to creating "summary tables." I am used to Custom Tables in SPSS, and the CrossTable function in the package gmodels gets me almost where I need to be; not to mention it is easy to navigate for someone just starting out in R.
That said, it seems like Hmisc's table functions are very good at creating various summaries and exporting to LaTeX (ultimately what I need to do).
My questions are: 1) Can you create the table below easily with Hmisc? 2) If so, can I interact variables (2 in the column)? And finally, 3) can I access the p-values of significance tests (chi-square)?
Thanks in advance,
Brock
Cell Contents
|-------------------------|
| Count |
| Row Percent |
| Column Percent |
|-------------------------|
Total Observations in Table: 524
| asq[, 23]
asq[, 4] | 1 | 2 | 3 | 4 | 5 | Row Total |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
0 | 76 | 54 | 93 | 46 | 54 | 323 |
| 23.529% | 16.718% | 28.793% | 14.241% | 16.718% | 61.641% |
| 54.286% | 56.250% | 63.265% | 63.889% | 78.261% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
1 | 64 | 42 | 54 | 26 | 15 | 201 |
| 31.841% | 20.896% | 26.866% | 12.935% | 7.463% | 38.359% |
| 45.714% | 43.750% | 36.735% | 36.111% | 21.739% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
Column Total | 140 | 96 | 147 | 72 | 69 | 524 |
| 26.718% | 18.321% | 28.053% | 13.740% | 13.168% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
The gmodels package has a function called CrossTable, which is very nice for those used to SPSS and SAS output. Try this example:
library(gmodels) # run install.packages("gmodels") if you haven't installed the package yet
x <- sample(c("up", "down"), 100, replace = TRUE)
y <- sample(c("left", "right"), 100, replace = TRUE)
CrossTable(x, y, format = "SPSS")
This should provide you with output just like the one you displayed in your question, very SPSS-y. :)
If you are coming from SPSS, you may be interested in the Deducer package ( http://www.deducer.org ). It has a contingency table function:
> library(Deducer)
> data(tips)
> tables<-contingency.tables(
+ row.vars=d(smoker),
+ col.vars=d(day),data=tips)
> tables<-add.chi.squared(tables)
> print(tables,prop.r=T,prop.c=T,prop.t=F)
================================================================================================================
==================================================================================
========== Table: smoker by day ==========
| day
smoker | Fri | Sat | Sun | Thur | Row Total |
-----------------------|-----------|-----------|-----------|-----------|-----------|
No Count | 4 | 45 | 57 | 45 | 151 |
Row % | 2.649% | 29.801% | 37.748% | 29.801% | 61.885% |
Column % | 21.053% | 51.724% | 75.000% | 72.581% | |
-----------------------|-----------|-----------|-----------|-----------|-----------|
Yes Count | 15 | 42 | 19 | 17 | 93 |
Row % | 16.129% | 45.161% | 20.430% | 18.280% | 38.115% |
Column % | 78.947% | 48.276% | 25.000% | 27.419% | |
-----------------------|-----------|-----------|-----------|-----------|-----------|
Column Total | 19 | 87 | 76 | 62 | 244 |
Column % | 7.787% | 35.656% | 31.148% | 25.410% | |
Large Sample
Test Statistic DF p-value | Effect Size est. Lower (%) Upper (%)
Chi Squared 25.787 3 <0.001 | Cramer's V 0.325 0.183 (2.5) 0.44 (97.5)
-----------
================================================================================================================
You can get the counts and test to latex or html using the xtable package:
> library(xtable)
> xtable(drop(extract.counts(tables)[[1]]))
> test <- contin.tests.to.table((tables[[1]]$tests))
> xtable(test)