Pivot/transpose a table in PL/SQL - plsql

I have this table:
and I would like to have this output:
Would this be possible? The problem is that I don't know how many values the table has.
Thank you very much.
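Since the table contents weren't included, here's a generic sketch of the usual approach when the pivot columns aren't known in advance: query the distinct values first, then build the pivot statement dynamically (in Oracle PL/SQL you would assemble the string the same way and run it with EXECUTE IMMEDIATE). The example below uses Python with an in-memory SQLite database and a made-up `sales` table, purely to illustrate the pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (name TEXT, category TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('alice', 'books', 10), ('alice', 'music', 5),
  ('bob',   'books', 7),  ('bob',   'video', 3);
""")

# Step 1: discover the distinct column values at runtime.
categories = [row[0] for row in
              conn.execute("SELECT DISTINCT category FROM sales ORDER BY category")]

# Step 2: build one conditional-aggregation column per value.
# (Values are interpolated directly here for brevity; a real procedure
# should sanitize or bind them.)
cols = ", ".join(
    "SUM(CASE WHEN category = '{0}' THEN amount ELSE 0 END) AS {0}".format(c)
    for c in categories)
sql = "SELECT name, {} FROM sales GROUP BY name ORDER BY name".format(cols)

rows = conn.execute(sql).fetchall()
# rows -> [('alice', 10, 5, 0), ('bob', 7, 0, 3)]
```

The key point is that the column list is computed, not hard-coded, so it works no matter how many distinct values the table holds.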

Related

How to change the value in a group of cells?

I'm wondering if it is possible to change the contents of multiple cells in a table using R?
Consider this example: Example
I need to change the values 'Femini.' to 'Feminine'. The problem is that I have a great number of cells to change... Is there a command that can help me do this?
Thanks for the help,
Luís
Say your data frame is called df:
df$Genre[df$Genre == 'Femini'] <- 'Feminine'
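For anyone doing the same outside R, the pattern is a single conditional pass over the rows, however many cells need changing. A Python sketch with made-up data (note the question's values carry a trailing period, 'Femini.', so match whichever form your data actually contains):

```python
# Hypothetical rows standing in for the data frame; 'Genre' holds the
# abbreviated values that need expanding.
rows = [
    {"Name": "Ana",   "Genre": "Femini."},
    {"Name": "Rui",   "Genre": "Masc."},
    {"Name": "Sofia", "Genre": "Femini."},
]

# One pass fixes every matching cell -- the same idea as
# df$Genre[df$Genre == 'Femini.'] <- 'Feminine' in R.
for row in rows:
    if row["Genre"] == "Femini.":
        row["Genre"] = "Feminine"
```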

Selecting rows in sqlite based on date

I have a table of which one column holds dates in this format '04/17/2014'.
I want to select rows of the table based on time. I'm trying to get all rows after a certain date. After reading posts here I tried the following, which doesn't seem to work. I get a lot of rows from 2013 back with this query. Can anybody help?
select Value_Date from Table_Outstanding
where VALUE_DATE > '04/12/2014'
This did it. Thanks everybody.
select * from Table_Outstanding
where strftime('%m/%d/%Y', VALUE_DATE) > '04/12/2014'
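Worth noting for future readers: comparing 'MM/DD/YYYY' strings lexicographically orders by month first, which is why rows from 2013 leaked through the original query. String comparison only matches date order when the dates are stored (or rewritten on the fly) as ISO 'YYYY-MM-DD'. A small Python/SQLite sketch of both behaviours, using a made-up table shaped like the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table_Outstanding (VALUE_DATE TEXT)")
conn.executemany("INSERT INTO Table_Outstanding VALUES (?)",
                 [("05/01/2013",), ("04/17/2014",), ("03/02/2014",)])

# Lexicographic comparison of MM/DD/YYYY strings: '05/01/2013' sorts
# after '04/12/2014', so the 2013 row wrongly matches.
wrong = conn.execute("""SELECT VALUE_DATE FROM Table_Outstanding
                        WHERE VALUE_DATE > '04/12/2014'""").fetchall()

# Rewriting the stored value as ISO YYYY-MM-DD makes string order
# equal date order.
right = conn.execute("""
    SELECT VALUE_DATE FROM Table_Outstanding
    WHERE substr(VALUE_DATE, 7, 4) || '-' || substr(VALUE_DATE, 1, 2)
          || '-' || substr(VALUE_DATE, 4, 2) > '2014-04-12'
""").fetchall()
```

Storing dates as ISO strings in the first place avoids the per-query rewrite entirely.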

Better "delete rows from table" performance

I have an RDF graph in Oracle that has approx. 7,000,000 triples (rows).
I have a simple select statement that gets old duplicates (triples) and deletes them from this RDF graph.
Now, let's say my SELECT returns 300 results. This gets computationally very expensive, since the DELETE does a full scan of the TEST_tpl table 300 times, and as I said, TEST_tpl has approx. 7,000,000 rows...
DELETE FROM TEST_tpl t
WHERE t.triple.get_subject() IN
  (SELECT rdf$stc_sub FROM rdf_stage_table_TEST
   WHERE rdf$stc_pred LIKE '%DateTime%')
I am trying to find the way to create an oracle procedure that would go through table only once for multiple values...
Or maybe someone knows of a better way...
The way I solved this was to create an INDEX on triple.get_subject():
CREATE INDEX "SEMANTIC"."TEST_tpl_SUB_IDX"
ON
"SEMANTIC"."TEST_tpl" ("MDSYS"."SDO_RDF_TRIPLE_S"."GET_SUBJECT"("TRIPLE"))
This improved the performance tremendously.
Thank you @Justin Cave and @Michael for your help.
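For readers on other databases: SQLite supports the same idea as Oracle's function-based index via indexes on expressions. A small Python sketch with a made-up `test_tpl` table, where `lower(subj)` stands in for `triple.get_subject()`, showing the DELETE's WHERE clause being answered from the index rather than a full scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_tpl (subj TEXT, pred TEXT, obj TEXT)")
conn.executemany("INSERT INTO test_tpl VALUES (?, ?, ?)",
                 [("s%d" % i, "p", "o") for i in range(1000)])

# Expression index: the analogue of Oracle's function-based index on
# triple.get_subject().
conn.execute("CREATE INDEX tpl_sub_idx ON test_tpl (lower(subj))")

# The query planner can now satisfy the WHERE clause from the index;
# the plan should mention tpl_sub_idx instead of a table scan.
plan = conn.execute("""EXPLAIN QUERY PLAN
    DELETE FROM test_tpl WHERE lower(subj) IN ('s1', 's2')""").fetchall()
uses_index = any("tpl_sub_idx" in row[-1] for row in plan)

conn.execute("DELETE FROM test_tpl WHERE lower(subj) IN ('s1', 's2')")
remaining = conn.execute("SELECT COUNT(*) FROM test_tpl").fetchone()[0]
```

The caveat in both databases is the same: the expression in the WHERE clause has to match the indexed expression exactly for the index to be used.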

Use LINQ to Total Columns

I'm trying to get LINQ to SQL to grab and total this data, and I'm having a heck of a time doing it.
Here is my code; it doesn't error out, but it doesn't total the data.
' Get Store Record Data
Dim MyStoreNumbers = (From tnumbers In db.Table_Numbers
Where tnumbers.Date > FirstOfTheMonth
Select tnumbers)
I'm trying to create a loop that will group the data by DATE and give me the totals so I can graph it.
As you can see, I'd like to set totals for Internet, TV, Phone, etc. Any help would be great, thank you!
You can group and total the numbers one by one, like this:
From tnumbers In db.Table_Numbers
Where tnumbers.Date > FirstOfTheMonth
Group By Date = tnumbers.Date
Into TotalPhone = Sum(tnumbers.Phone)
Select Date, TotalPhone
Here is a link with explanations of this subject from Microsoft.
Edit: added grouping
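The same Group By / Sum shape, written in Python for comparison. The records below are made up to mirror Table_Numbers (one row per store per day); extend the column tuple to cover every column you want totalled:

```python
from collections import defaultdict

# Hypothetical rows mirroring Table_Numbers.
records = [
    {"Date": "2014-04-01", "Internet": 3, "TV": 2, "Phone": 1},
    {"Date": "2014-04-01", "Internet": 1, "TV": 0, "Phone": 4},
    {"Date": "2014-04-02", "Internet": 2, "TV": 5, "Phone": 0},
]

# Group By Date ... Into Sum(...): accumulate per-date totals per column.
totals = defaultdict(lambda: {"Internet": 0, "TV": 0, "Phone": 0})
for rec in records:
    for col in ("Internet", "TV", "Phone"):
        totals[rec["Date"]][col] += rec[col]

# totals["2014-04-01"] -> {'Internet': 4, 'TV': 2, 'Phone': 5}
```

Each key of `totals` is one date, ready to feed to a graphing loop.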

RSQLite Faster Subsetting of large Table?

So I have a large dataset (see my previous question) that I need to subset based on an ID held in another table.
I use a statement like:
vars <- dbListFields(db, "UNIVERSE")
ids <- dbGetQuery(db, "SELECT ID FROM LIST1")
dbGetQuery(db,
paste("CREATE TABLE SUB1 (",
paste(vars,collapse=" int,"),
")"
) )
dbGetQuery(db,
paste("INSERT INTO SUB1 (",
paste(vars,collapse=","),
") SELECT * FROM UNIVERSE WHERE
UNIVERSE.ID IN (",
paste(t(ids),collapse=","),
")"
) )
The code runs (I may have missed a parenthesis above) but it takes a while since my table UNIVERSE is about 10 gigs in size. The major problem is I'm going to have to run this for many different tables "LIST#" to make "SUB#" and the subsets are not disjoint so I can't just delete the record from UNIVERSE when I'm done with it.
I'm wondering if I've gone about subsetting the wrong way or if there's other ways I can speed this up?
Thanks for the help.
This is kind of an old question and I don't know if you found the solution or not. If UNIVERSE.ID is a unique, non-NULL integer, setting it up as an 'INTEGER PRIMARY KEY' should speed things up a lot. There's some code and discussion here:
http://www.mail-archive.com/r-sig-db%40stat.math.ethz.ch/msg00363.html
I don't know if using an inner join would speed things up or not; it might be worth a try too.
Do you have an index on UNIVERSE.ID? I'm no SQLite guru, but generally you want fields that you are going to query on to have indexes.
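To make the two suggestions concrete (an INTEGER PRIMARY KEY on UNIVERSE.ID, plus an inner join instead of pasting thousands of IDs into an IN (...) list), here is a small sketch using Python's sqlite3 rather than RSQLite; the table and column names come from the question, the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY on ID: each lookup becomes a rowid seek instead
# of a scan of the 10-gig table.
conn.execute("CREATE TABLE UNIVERSE (ID INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO UNIVERSE VALUES (?, ?)",
                 [(i, i * 10) for i in range(1, 101)])
conn.execute("CREATE TABLE LIST1 (ID INTEGER)")
conn.executemany("INSERT INTO LIST1 VALUES (?)", [(2,), (5,), (7,)])

# Inner join instead of a giant pasted IN (...) list; SQLite resolves
# each LIST1.ID through UNIVERSE's primary key.
conn.execute("""CREATE TABLE SUB1 AS
                SELECT u.* FROM UNIVERSE u
                JOIN LIST1 l ON u.ID = l.ID""")
rows = conn.execute("SELECT * FROM SUB1 ORDER BY ID").fetchall()
# rows -> [(2, 20), (5, 50), (7, 70)]
```

The same statements can be issued unchanged through dbGetQuery, so nothing about the pattern is Python-specific.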
