I have a table like the one below:
| id | Name |
| 1 | foo |
| 2 | bar |
I want to write a select query that returns some prepended text before the id, so my output would be something like val_1 and val_2.
I couldn't find any concat method for web2py select queries, so to achieve this I have to manipulate the result separately. Is there a way to form the select query in web2py so that it uses SQL concat?
The .select() method can take SQL expressions as arguments in addition to fields, so you can do:
val = "'val_' || mytable.id"
rows = db(db.mytable).select(val)
print(rows[0][val])
Note, when using an expression in the select, the resulting value is stored in the row object with a key equivalent to the SQL expression itself, hence the use of [val] to extract the value from the row object.
As an alternative to the above approach, you might instead consider using a computed field or a virtual field.
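As a rough sketch of the virtual-field route (assuming a web2py model file where db and Field are already provided by the framework; the derived column name label is made up for illustration). A virtual field is a natural fit here because it is computed at select time, when the database-assigned id is already known:
# In a web2py model file (db and Field are provided by the framework).
db.define_table('mytable', Field('name'))

# Virtual field: computed on the fly each time rows are selected.
db.mytable.label = Field.Virtual(
    'label', lambda row: 'val_%s' % row.mytable.id)

rows = db(db.mytable).select()
for row in rows:
    print(row.mytable.label)  # e.g. val_1, val_2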
I am using Nebula Graph to store a graph of multiple nodes. For example:
I have a TAG named Entity with one attribute, name, and an EDGE named call with no attributes. I inserted many vertices of type Entity, and they have edges of type call between them.
I want to query my graph for a specific vertex. I only have its name; I do not know the id under which it was inserted.
I read the nGQL manual and went over the usage of the GO FROM statement, but I was not able to find a way to start the query from an attribute value of the vertex.
Can anyone help me with that?
I want to do this: find the vertex id that has name = "x".
You can do this by creating an index on the tag property and then using the LOOKUP statement:
CREATE {TAG | EDGE} INDEX [IF NOT EXISTS] <index_name> ON {<tag_name> | <edge_name>} (prop_name_list)
LOOKUP ON {<vertex_tag> | <edge_type>} WHERE <expression> [AND | OR <expression> ...] [YIELD <return_list>]
For example, in your case, assume you have a tag entity with two properties, name and age. If you want to know the vertex ID with the name Amber, the query looks like the following:
First, you build an index on entity:
CREATE TAG entity(name string, age int);
CREATE TAG INDEX entity_index ON entity(name, age);
INSERT VERTEX entity(name, age) VALUES 101:("Amber", 21);
LOOKUP ON entity WHERE entity.name == "Amber";
============
| VertexID |
============
| 101 |
------------
If you don't specify the YIELD keyword, the vertex ID is returned by default. Do let me know if this helps.
Note:
Create the tag first, then the index (rebuilding an index is not supported right now).
Insert the data after the index is created.
I have written the following code, which extracts the names of tables that have used storage in Sentinel. I'm not sure whether Kusto is capable of doing this, but essentially I want to use the values stored in "Out" as table names, e.g. union (Out) or search in (Out) *
let MyTables =
Usage
| where Quantity > 0
| summarize by DataType ;
let Out =
MyTables
| summarize make_list(DataType);
No, this is not possible. The tables referenced by a query must be known during query planning. A possible workaround is to generate the query and invoke it using the execute_query plugin.
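Another way to apply the same "generate, then run" idea is to do it client-side. Below is a rough sketch using the azure-kusto-data Python package, assuming the data lives in an Azure Data Explorer cluster you can query directly; the cluster URL, database name, and authentication method are placeholders, not part of the original question:
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholders - substitute your own cluster, database, and auth method.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<cluster>.kusto.windows.net")
client = KustoClient(kcsb)
database = "<database>"

# Step 1: fetch the names of the tables that have used storage.
names_query = "Usage | where Quantity > 0 | summarize by DataType"
response = client.execute(database, names_query)
tables = [row["DataType"] for row in response.primary_results[0]]

# Step 2: generate a union over those tables and run it as a second query.
union_query = "union " + ", ".join(tables) + " | take 10"
result = client.execute(database, union_query)
for row in result.primary_results[0]:
    print(row)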
I have a custom property in my appInsights telemetry that is a JSON array of key/value pairs. What I want to do is project out that key/value pair, and it seems that using parsejson and mvexpand together is how to achieve this; however, I seem to be missing something. The end result of my expression is a column named type that contains the raw JSON. Attempting to add any property to the expression results in an empty column.
JSON-encoded property
[{"type":"text/xml","count":1}]
AIQL
requests
| project customDimensions
| extend type=parsejson(customDimensions.['Media Types'])
| mvexpand bagexpansion=array type
Update 6/30/17
To answer EranG's question, the output of my request when projecting out the properties as columns is shown below.
I had the same issue recently. Your property is probably already of type dynamic, but it is a dynamic string, not an array. parsejson doesn't work because it converts a string to dynamic, not dynamic to another dynamic. To work around this, I suggest you first convert your property to a string and then parse it again.
Please try the following example. It may help you as it helped me:
requests
| project customDimensions
| extend type=parsejson(tostring(customDimensions.['Media Types']))
| mvexpand type
| project type.type, type.['count']
What mvexpand does is take your array and break it down into rows, so each row will have a single item from the array.
If you want to break each item to columns, you'll need to try something like:
requests
| project customDimensions
| extend type=parsejson(customDimensions.['Media Types'])
| mvexpand bagexpansion=array type
| project type = type.type, count_ = type["count"]
If you have several tables inside an SQLite database, how could you find out whether or not each one has an auto-increment primary key?
For instance, I am already aware that you can get some info about the columns of a table by simply querying this: pragma table_info(tablename_in_here)
It would be much better to determine the auto-increment column dynamically rather than setting up each corresponding model in the source code with a boolean value.
Edit:
Let me use this table as an example:
CREATE TABLE "test" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"name" TEXT NOT NULL
)
and this is the result table after executing pragma table_info("test")
cid | name | type | notnull | dflt_value | pk
0 | id | INTEGER | 1 | null | 1
1 | name | TEXT | 1 | null | 0
As you can see, there is no information about whether the id column is autoincrement or not.
Edit2:
I am looking for a solution that queries SQLite directly through a statement.
Approaches where the sqlite3 command-line tool is used to parse the required information are not acceptable; they do not work in situations where you cannot execute terminal commands programmatically, like in an Android app.
Autoincrementing primary keys must be declared as INTEGER PRIMARY KEY or some equivalent, so you can use the table_info data to detect them.
A column is an INTEGER PRIMARY KEY column if, in the PRAGMA table_info output,
the type is integer or INTEGER or any other case-insensitive variant; and
pk is set; and
pk is not set for any other column.
To check whether the column definition includes the AUTOINCREMENT keyword, you have to look directly into the sqlite_master table; SQLite has no other mechanism to access this information.
If this query returns a record, the AUTOINCREMENT keyword appears somewhere in the table definition (which might give a false positive if the word only occurs inside a comment):
SELECT 1
FROM sqlite_master
WHERE type = 'table'
AND name = 'tablename_in_here'
AND sql LIKE '%AUTOINCREMENT%'
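Putting the two checks together, here is a rough sketch using Python's sqlite3 module (the table name test and the database file name are just the example values from above):
import sqlite3

def autoincrement_column(conn, table):
    """Return the name of the AUTOINCREMENT column of `table`, or None."""
    # The AUTOINCREMENT keyword only shows up in the stored CREATE TABLE sql.
    row = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table,)).fetchone()
    if row is None or 'AUTOINCREMENT' not in row[0].upper():
        return None
    # Find the single INTEGER PRIMARY KEY column via table_info.
    # (PRAGMA arguments cannot be bound, so the table name must be trusted.)
    cols = conn.execute('PRAGMA table_info(%s)' % table).fetchall()
    pk_cols = [c for c in cols if c[5]]           # c[5] is the pk flag
    if len(pk_cols) == 1 and pk_cols[0][2].upper() == 'INTEGER':
        return pk_cols[0][1]                      # c[1] is the column name
    return None

conn = sqlite3.connect('test.db')
print(autoincrement_column(conn, 'test'))         # -> 'id' (or None)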
You can parse the output of .schema. That will give you the sql commands as you used them to create your tables. If autoincrement was declared, you will see it in the output. This has the advantage that it will list all your tables too.
The architecture for this scenario is as follows:
I have a table of items and several tables of forms. Rather than having the forms own the items, the items own the forms. This is because one item can be on several forms (although only one of each type, but not necessarily on any). The forms and items are all tied together by a common OrderId. This can be represented like so:
OrderItems | Form A | Form B etc....
---------- |--------- |
ItemId |FormAId |
OrderId |OrderId |
FormAId |SomeField |
FormBId |OtherVar |
FormCId |etc...
This works just fine for these forms. However, there is another form, (say, FormX) which cannot have an OrderId because it consists of items from multiple orders. OrderItems does contain a column for FormXId as well, but I'm confused about the best way to get a list of the "FormX"s related to a single OrderId. I'm using MySQL and was thinking maybe a stored proc was the best way to go on this, but I've never used a stored proc on MySQL and don't really know the best way to go about it. My other (kludgy) option was to hit the DB twice, first to get all the items that are for the given OrderId that also have a FormXId, and then get all their FormXIds and do a dynamic SELECT statement where I do something like (pseudocode)
SELECT whatever FROM sometable WHERE FormXId=x OR FormXId=y....
Obviously this is less than ideal, but I can't really think of any other way... anything better I could do either programmatically or architecturally? My back-end code is ASP.NET.
Thanks so much!
UPDATE
In response to the request for more info:
Sample input:
OrderId = 1000
Sample output
FormXs:
-----------------
FormXId | FieldA | FieldB | etc
-------------------------------
1003 | value | value | ...
1020 | ... .. ..
1234 | .. . .. . . ...
You see, the problem is that FormX doesn't have one single OrderId but rather a collection of OrderIds. Sometimes multiple items from the same order are on FormX, sometimes it's just one, and most orders don't have any items on FormX. But when someone pulls up their order, I need all the FormXs their items appear on to show up so they can be modified/viewed.
I was thinking of maybe creating a stored proc that does what I said above, run one query to pull down all the related OrderIds and then another to return the appropriate FormXs. But there has to be a better way...
I understand you need to get a list of the "FormX"s related to a single OrderId, and you say that OrderItems contains a column for FormXId.
You can issue the following query:
select FormX.*
from OrderItems
join FormX on OrderItems.FormXId = FormX.FormXId
where OrderItems.OrderId = #orderId
You need to pass #orderId to your query and you will get a record set with FormX records related to this order.
You can either package this query up as a stored procedure with an #orderId parameter, or use dynamic SQL and substitute #orderId with the real order number you are executing the query for.
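The back end in the question is ASP.NET, but purely as an illustration of the parameterized approach, here is a sketch using Python's PyMySQL driver; the connection details are placeholders, and the table/column names are taken from the question:
import pymysql

def formx_for_order(order_id):
    # Connection details are placeholders - substitute your own.
    conn = pymysql.connect(host='localhost', user='user',
                           password='password', database='orders')
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            # The driver binds the parameter; no string substitution needed.
            cur.execute(
                """SELECT FormX.*
                   FROM OrderItems
                   JOIN FormX ON OrderItems.FormXId = FormX.FormXId
                   WHERE OrderItems.OrderId = %s""",
                (order_id,))
            return cur.fetchall()
    finally:
        conn.close()

print(formx_for_order(1000))  # the sample OrderId from the question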