How to access CSV files from SQLite?

I have a CSV file called "products.csv".
This file contains products, and the first line contains the column names.
My question is how to create a view in SQLite that accesses this file and can be queried like a normal view, such as:
SELECT ID, NAME FROM PRODUCTS
I'm looking for a solution that does not import the data into a table.

While the better and more efficient approach would be to import the CSV file into a real table (which is easy to do with SQLite), the CSV virtual table extension does pretty much exactly what you want.
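A minimal sketch of how that can look in the sqlite3 shell, assuming the csv.c extension from the SQLite source tree has been compiled into a loadable module named csv, and that its header option takes the column names from the first line of the file:

.load ./csv

CREATE VIRTUAL TABLE temp.products_csv USING csv(filename='products.csv', header=yes);

-- Wrap the virtual table in a view so it can be queried under the name used above.
CREATE TEMP VIEW PRODUCTS AS SELECT * FROM products_csv;

SELECT ID, NAME FROM PRODUCTS;

Nothing is imported: the virtual table reads products.csv each time it is queried.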

Related

Is there a way to mass-add columns in SQLiteStudio?

I'm new to creating databases, and right now all I want to do is import a CSV file into an empty SQLite3 database using SQLiteStudio. I created an extremely basic table with only a single unnamed empty column, and then attempted to import my file into that table; however, I keep getting an error saying that my table has fewer columns than the file and that any extra columns will be ignored. I'd really rather not have to create 52 dummy columns; is there some way to work around this?
Skip creating the table yourself. Import into a table that doesn't exist yet and SQLiteStudio will create it for you, with all the required columns.
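The sqlite3 command-line shell behaves the same way, if that is ever more convenient. A quick sketch, where mydb.db, data.csv, and mytable are placeholder names:

sqlite3 mydb.db
.mode csv
.import data.csv mytable

When mytable does not already exist, the shell creates it automatically, taking the column names from the first row of data.csv.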

Read a csv file and insert the values in mysql database using R

I am able to read a CSV file using the read function. I now want to insert the values into a table in a MySQL database, and I have to make it dynamic so that it can still insert if the content of the CSV changes.
Your post is very broad. I advise you to go step by step; read the dplyr documentation.
I use dplyr for persistence to a MySQL database. It is a powerful package.
https://shiny.rstudio.com/articles/pool-dplyr.html

How to avoid cartesian-product in a cypher query and still create links between objects?

I imported a table with thousands of Equipments. Then I imported another table with equipment types, which contains around 20 types.
When I wrote the Cypher query below to associate them, Neo4j warned me about a cartesian product. Is there a better way to create the associations? Should I have done it during the CSV import?
MATCH (te:Equipment_Type),(e:Equipment)
WHERE te.type_id = e.type_id
CREATE (e)-[:TYPE_OF]->(te)
Update
I tried what Brian suggested during the CSV import, and it worked like a charm.
Imported the Equipment Types first;
Then created an index on Equipment(type_id);
Modified the code to search during the CSV import.
From Neo4j Console:
Added 100812 labels, created 100812 nodes, set 414307 properties,
created 100812 relationships, statement executed in 33902 ms.
The Code:
CREATE INDEX ON :Equipment(type_id)

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "http://localhost/Equipments.csv" AS row
// Create the Equipment node for this CSV row (or reuse it if it already exists).
MERGE (e:Equipment {eqp_id: row.eqp_id, name: row.name, type_id: row.type_id})
WITH e, row
// Look up the matching Equipment_Type node imported earlier.
MATCH (te:Equipment_Type)
WHERE te.type_id = row.type_id
CREATE (e)-[:TYPE_OF]->(te)
With the size of data that you're talking about it's not a big deal, especially if you have indexes on :Equipment_Type(type_id) and :Equipment(type_id). It's warning you because a cartesian product in a query can seem quick when you first write it on a small dataset and then grow quickly as you get more data.
But yes, creating the relationships during the CSV import would probably be the best way to approach it.

Export results into an Excel sheet from Teradata SQL Assistant

I want to export the results into an Excel sheet after running a query in Teradata SQL Assistant.
I used copy and paste, but it didn't work.
Thanks in advance.
If you return the answers to SQL Assistant, you should be able to select Save Answerset from the File menu. You will then have the option to save it in a proper Excel file format.
If you export the answers directly to a flat file, the delimited text file can in turn be opened with ease in Excel and then saved in a proper Excel file format (XLS, XLSX, etc.).
Select the whole Excel worksheet you will paste into and set the number format to 'text'.
Now you can safely copy the data from Teradata SQL Assistant's query results and paste it into the spreadsheet.

SQLite Select Last Occurrence of Character

I'm using SQLite, and I'm unable to find a way to locate the index of the last occurrence of a character. For example, the records that I need to parse are:
test123.contoso.txt
testABC.contoso.atlanta.docx
another.test.vb
I would appreciate it if anybody could point me in the direction of how to parse the file extensions (txt, docx, vb) from these records through a SQLite query. I've tried using the REVERSE function, but unfortunately SQLite doesn't include this in its toolbox.
You can adapt the solution in How to get the last index of a substring in SQLite? to extract the extension.
select distinct replace(file, rtrim(file, replace(file, '.', '')), '') from files;
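To see why this works, unpack the expression: replace(file, '.', '') yields the file name with every dot removed; rtrim(file, ...) then strips characters from the right of the name until it reaches the last dot (the only character not in that set), leaving the prefix up to and including the last dot; the outer replace removes that prefix, so only the extension remains. A self-contained illustration, using a hypothetical files table filled with the sample records from the question:

CREATE TABLE files (file TEXT);
INSERT INTO files VALUES
    ('test123.contoso.txt'),
    ('testABC.contoso.atlanta.docx'),
    ('another.test.vb');

-- Returns 'txt', 'docx', and 'vb' in the ext column.
SELECT file,
       replace(file, rtrim(file, replace(file, '.', '')), '') AS ext
FROM files;

One caveat: if a name contains no dot at all, rtrim strips the entire string and the expression returns the whole file name unchanged.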
If you want to check whether a file name has a specific extension, you can use LIKE:
... WHERE FileName LIKE '%.txt'
However, it is not possible with the built-in functions to extract the file extension.
If you need to handle the file extension separately, you should store it separately in the database, too.
