I'm trying to fetch some URLs using the POST method in Tcl 8.0, but it doesn't print any output. Here is the relevant chunk of my code:
foreach sKey [array names aQuery] {
    set sValue $aQuery($sKey)
    append sQueryString "[::http::formatQuery $sKey $sValue]&"
}
set sQueryString [string trim $sQueryString "&"]
set sToken [::http::geturl $sUrl -query $sQueryString -channel stdout]
::http::wait $sToken
upvar #0 $sToken state
foreach sKey [array names state] {
    puts "$sKey $state($sKey)"
}
Upgrade already. Why the heck are you using a version that was released in the last millennium?
The http::formatQuery procedure takes one or more key-value pairs as arguments, so that part would be better rendered as below. If in doubt, it's probably better to avoid the -channel option and check the status yourself. So something like:
set query [eval ::http::formatQuery [array get aQuery]]
set tok [http::geturl $sUrl -query $query -timeout 10000]
http::wait $tok
if {![string compare [http::status $tok] "ok"]} {
    puts [http::data $tok]
} else {
    puts stderr [http::error $tok]
}
http::cleanup $tok
Note that in more recent versions of Tcl you could have used [http::status $tok] eq "ok" or [string equal [http::status $tok] "ok"]. Don't forget to clean up the http token. If you are doing this in a GUI program, use the -command option and do all the work in the callback so you don't freeze the UI in http::wait.
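For the GUI case, a rough sketch of the -command style (untested; onDone is just an illustrative name):

package require http

# Callback; http::geturl invokes it with the token when the request finishes.
proc onDone {tok} {
    if {[http::status $tok] eq "ok"} {
        puts [http::data $tok]
    } else {
        puts stderr [http::error $tok]
    }
    http::cleanup $tok
}

set query [eval ::http::formatQuery [array get aQuery]]
# Returns immediately; onDone runs from the event loop, so the UI stays responsive.
http::geturl $sUrl -query $query -timeout 10000 -command onDone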
I am creating a Cloud Composer environment from Terraform, where I want to pass a JSON as an environment variable.
Terraform code:
software_config {
  env_variables {
    AIRFLOW_VAR_MYJSON = "{'__comment1__': 'This the global section', 'project_id':'testproject', 'gce_zone':'us-east1-c', 'gce_region':'us-east1','networkname':'vpc1', 'subnetwork':'https://www.googleapis.com/compute/v1/projects/testproject/regions/us-east1/subnetworks/subnet1'}"
  }
}
I am trying to read the value of AIRFLOW_VAR_MYJSON in the DAG, but it is not working, as the value is not recognized as JSON.
I tried converting it and then deserializing it with the following code:
JSONList = Variable.get("MYJSON")
jsonvar = json.dumps(JSONList)
setting_var = Variable.set("settings", jsonvar)
dag_config = Variable.get("settings", deserialize_json=True)
but it is not working.
I have also tried using
dag_config = json.loads(jsonvar)
then reading the value as
project_id = dag_config["project_id"]
but I get the error: "string indices must be integers".
Please suggest a way to resolve this.
NOTE: I know the gcloud command to set variables from a JSON file, but that is not working in my case: the project is in a VPC and the Kubernetes clusters are giving timeout or handshake errors, so I have ruled out that option.
Valid JSON uses " for strings, never '. Try switching the quotes.
A value can be a string in double quotes, or a number, or true or false or null, or an object or an array.
software_config {
  env_variables {
    AIRFLOW_VAR_MYJSON = "{\"__comment1__\": \"This the global section\", \"project_id\":\"testproject\", \"gce_zone\":\"us-east1-c\", \"gce_region\":\"us-east1\",\"networkname\":\"vpc1\", \"subnetwork\":\"https://www.googleapis.com/compute/v1/projects/testproject/regions/us-east1/subnetworks/subnet1\"}"
  }
}
Or a little nicer way:
software_config {
  env_variables {
    AIRFLOW_VAR_MYJSON = jsonencode({
      "__comment1__" = "This the global section",
      "project_id"   = "testproject",
      "gce_zone"     = "us-east1-c",
      "gce_region"   = "us-east1",
      "networkname"  = "vpc1",
      "subnetwork"   = "https://www.googleapis.com/compute/v1/projects/testproject/regions/us-east1/subnetworks/subnet1",
    })
  }
}
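With the value stored as real JSON, reading it in the DAG should reduce to a single call. A minimal sketch of the DAG side (Airflow strips the AIRFLOW_VAR_ prefix, so the variable is visible as MYJSON; the original error came from json.dumps re-serializing an already-string value into a quoted JSON string, which json.loads then turned back into a plain string):

from airflow.models import Variable

# AIRFLOW_VAR_MYJSON is exposed to Airflow as the variable "MYJSON".
dag_config = Variable.get("MYJSON", deserialize_json=True)

project_id = dag_config["project_id"]  # "testproject"
gce_zone = dag_config["gce_zone"]      # "us-east1-c"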
In a unit test, I need to verify that the program skips locked records when processing a table.
I have been unable to set up a locked record, because the test can't lock a record against itself, which makes a lot of sense.
Here is a sample of what I'm trying to achieve.
DEF VAR v_isCommitted AS LOGICAL NO-UNDO.
DEF VAR hl            AS HANDLE  NO-UNDO.
DEF BUFFER bufl FOR tablename.
hl = BUFFER bufl:HANDLE.

LOCKED_RECORDS:
DO TRANSACTION ON ERROR UNDO, LEAVE LOCKED_RECORDS:
    /* Setup: create a record that is not committed yet. */
    CREATE tablename.
    ASSIGN tablename.fields = fieldsvalue.

    /* Act: the code I'm trying to test. */
    /* ...some code... */
    v_isCommitted = hl:FIND-BY-ROWID(ROWID(tablename), EXCLUSIVE-LOCK, NO-WAIT)
        AND AVAILABLE(bufl)
        AND NOT LOCKED(bufl).
    /* ...some code touching the record if it is committed... */

    /* Assert: the program left the new record tablename AS IS. */
END.
The problem is that the record is available and unlocked from the test's point of view, because the test itself created it.
Is there a way I could have the test lock a record from itself so the act part can actually skip the record like it was created by someone else?
Progress: 11.7.1
A session cannot lock a record against itself, so you will need to start a second session. For example:
/* code to set things up ... */
/* spawn a sub process to try to lock the record */
os-command silent value( substitute( '_progres -b -db &1 -p lockit.p -param "&2" && > logfile 2>&&1', dbname, "key" )).
In lockit.p use session:parameter to get the key for the record to test (or hard code it I suppose).
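A rough sketch of that lockit.p variant (untested; tablename and keyfield are placeholders for the table and key from the question):

/* lockit.p -- lock the record identified by -param and hold it briefly */
define variable cKey as character no-undo.
cKey = session:parameter.

find first tablename where tablename.keyfield = cKey
    exclusive-lock no-wait no-error.

if available tablename then do:
    /* crude hold: keep the lock while the parent session runs its test */
    etime( yes ).
    do while etime < 10000:
        /* busy-wait for roughly 10 seconds */
    end.
end.
quit.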
Or, as mentioned in the comments below:
/* locktest.p
 */
define variable lockStatus as character no-undo format "x(20)".

find first customer exclusive-lock.

input through value( "_progres /data/sports120/sports120 -b -p ./lockit.p" ).

repeat:
    import unformatted lockStatus.
end.

display lockStatus.
and:
/* lockit.p
 */
find first customer exclusive-lock no-wait no-error.

if locked( customer ) then
    put "locked".
else
    put "not locked".

quit.
I'm using the System.Data.SQLite ADO.NET provider for SQLite and the following PowerShell code to execute queries (and nonqueries) against a SQLite3 DB:
Function Invoke-SQLite ($DBFile, $Query) {
    try {
        Add-Type -Path ".\System.Data.SQLite.dll"
    }
    catch {
        Write-Warning "Unable to load System.Data.SQLite.dll"
        return
    }
    if (!$DBFile) {
        throw "DB Not Found"
    }
    $conn = New-Object System.Data.SQLite.SQLiteConnection
    $conn.ConnectionString = "Data Source={0}" -f $DBFile
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = $Query
    #$cmd.CommandTimeout = 10
    $ds = New-Object System.Data.DataSet
    $da = New-Object System.Data.SQLite.SQLiteDataAdapter($cmd)
    [void]$da.Fill($ds)
    $cmd.Dispose()
    $conn.Close()
    Write-Host ("{0} Row(s) returned " -f ($ds.Tables[0].Rows | Measure-Object | Select -ExpandProperty Count))
    return $ds.Tables[0]
}
The problem is: while it is trivial to know how many rows have been SELECTed by a query, the same is not true if the operation is an INSERT, DELETE or UPDATE (a nonquery).
I know I could use the ExecuteNonQuery method, but I need a generic wrapper which returns the number of affected rows while being agnostic about the query it executes (as Invoke-SQLCmd would do, for example).
Is that possible?
Thanks!
A few comments before the answer:
System.Data.SQLite supports executing multiple SQL statements in one command, as long as the CommandText has each valid statement delimited by a semicolon (;). This means there could be a mixture of queries and DML statements (i.e. INSERT, UPDATE, DELETE). The fact that you do not want to distinguish between the types of statement in $Query tells me that you are likely just passing statements blindly, so it could contain any combination of statements. Simply getting only one value (whether from a query or DML) seems too limiting.
Using a DataAdapter to fill a dataset just to get counts is inefficient. Instead, it may be better to just get a DataReader object and count the returned rows. This also allows a separate count for each query statement to be retrieved, something that gets obscured by using the DataAdapter object. (Perhaps enumerating all tables in the resultant dataset could get the same number, but I'm not certain that would always be equivalent.)
One good thing is that if you insist on using a DataAdapter, it will still execute DML statements (even though the expected result is a query that returns rows). The dataset will not be changed (filled), but all statements in the command text will still make changes in the database, so the following solution will still be useful.
Even if the code had worked, I assume that the line which prints "{0} Row(s) returned" is meant to show a simple count, but $ds.Tables[0].Rows needs to be $ds.Tables[0].Rows.Count.
Notes about this particular solution:
The key is to call either of the SQLite SQL functions changes() or total_changes(). These can be retrieved using SQL: SELECT total_changes();. I recommend getting total_changes() before and after a command and then taking the difference. That will capture the changes for multiple statements executed by one command.
I'm not a PowerShell guru, so I tested everything in C#. Treat the code below more as pseudo code since it may need tweaking.
The code:
$conn = New-Object System.Data.SQLite.SQLiteConnection
try {
    $conn.ConnectionString = "Data Source={0}" -f $DBFile
    $conn.Open()
    $cmdCount = $conn.CreateCommand()
    $cmd = $conn.CreateCommand()
    try {
        $cmdCount.CommandText = "SELECT total_changes();"
        $beforeChanges = $cmdCount.ExecuteScalar()
        $cmd.CommandText = $Query
        $ds = New-Object System.Data.DataSet
        $da = New-Object System.Data.SQLite.SQLiteDataAdapter($cmd)
        $rows = 0
        try {
            [void]$da.Fill($ds)
            foreach ($tbl in $ds.Tables) {
                $rows += $tbl.Rows.Count
            }
        } catch {}
        $afterChanges = $cmdCount.ExecuteScalar()
        $DMLchanges = $afterChanges - $beforeChanges
        $totalRowAndChanges = $rows + $DMLchanges
        # $ds.Tables[0] may or may not be valid here.
        # If the query returned no data, no tables will exist.
    } finally {
        $cmdCount.Dispose()
        $cmd.Dispose()
    }
} finally {
    $conn.Dispose()
}
Alternatively, you could eliminate the DataAdapter:
$cmd.CommandText = $Query
$rdr = $cmd.ExecuteReader()
$rows = 0
do {
    while ($rdr.Read()) {
        $rows++
    }
} while ($rdr.NextResult())
$rdr.Close()
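If you go the DataReader route, the total_changes() bracketing from above still applies; a sketch of the combined body (same objects as in the earlier snippet):

$cmdCount.CommandText = "SELECT total_changes();"
$beforeChanges = $cmdCount.ExecuteScalar()

$cmd.CommandText = $Query
$rdr = $cmd.ExecuteReader()
$rows = 0
do {
    while ($rdr.Read()) { $rows++ }
} while ($rdr.NextResult())
$rdr.Close()

$DMLchanges = $cmdCount.ExecuteScalar() - $beforeChanges
Write-Host ("{0} row(s) returned, {1} row(s) affected" -f $rows, $DMLchanges)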
I am using redis2-nginx-module to serve HTML content stored as a value in Redis. The following nginx config fetches the value for a key from Redis:
redis2_query get $fullkey;
redis2_pass localhost:6379;
#default_type text/html;
When the URL is hit, the following unwanted prefix is rendered along with the value for that key:
$14
How do I remove this unwanted output? Also, if the key passed as an argument doesn't exist in Redis, how do I check for this condition and display a default page?
(Here's a similar question on ServerFault)
There's no way to do this with just the redis2 module, as it always returns the raw Redis protocol response; the $14 is the bulk-reply length header.
If you only need GET and SET commands you may try HttpRedisModule (redis_pass). If you need something fancier, like hashes, you should probably try filtering the raw Redis response with Lua, e.g. something along the lines of:
content_by_lua '
    local res = ngx.location.capture("/redis",
        { args = { key = ngx.var.fullkey } }
    )
    local body = res.body
    local s, e = string.find(body, "\r\n", 1, true)
    ngx.print(string.sub(body, e + 1))
';
(Sorry, the code's untested, don't have an OpenResty instance at hand.)
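For reference, the /redis location captured above might look something like this (also untested; set_unescape_uri comes from the set-misc-nginx-module, which OpenResty bundles):

location = /redis {
    internal;
    set_unescape_uri $fullkey $arg_key;  # decode the key passed in the capture args
    redis2_query get $fullkey;
    redis2_pass localhost:6379;
}

As for missing keys: a nonexistent key comes back as the nil bulk reply $-1\r\n, so the Lua snippet could test for that prefix and serve a default page instead of printing the body.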
Is it possible to make HTTP GET requests from within a Node-RED "function" node?
If yes, could somebody point me to some example code please?
The problem I want to solve is the following:
I want to parse a msg.payload with custom commands. For each command I want to make an HTTP request and replace the command with the response of an HTTP GET request.
expl:
msg.payload = "Good day %name%. It's %Time% in the %TimeOfDay%. Time for your coffee";
The %name%, %TimeOfDay% and %Time% should be replaced by the content of a GET request to http://nodeserver/name, ..., http://nodeserver/Time.
Thanks Hardilb. After half a day of searching I found out that the http request node can also be configured by placing a function node just before it that sets:
msg.url = "http://127.0.0.1:1880/" + msg.command;
msg.method = "GET";
I used the following code to get the list of commands:
var parts = msg.payload.split('%'),
    len = parts.length,
    odd = function (num) { return num % 2; };

msg.txt = msg.payload;
msg.commands = [];
msg.nrOfCommands = 0;

for (var i = 0; i < len; i++) {
    if (odd(i)) {
        msg.commands.push(parts[i]);
        msg.nrOfCommands = msg.nrOfCommands + 1;
    }
}
return msg;
You should avoid doing asynchronous or blocking stuff in function nodes.
Don't try to do it all in one function node; chain multiple function nodes with multiple http Request nodes to build the string up a part at a time.
You can do this by stashing the string in another variable on the msg object instead of payload, as in the sketch below.
One thing to look out for: make sure you clear out msg.headers before each call to the next http Request node.
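A minimal sketch of one link in that chain, i.e. a function node wired in front of an http Request node (property names other than msg.url, msg.method and msg.headers are just illustrative):

// Take the next placeholder collected earlier and prepare the request for it.
var command = msg.commands.shift();
msg.url = "http://127.0.0.1:1880/" + command;
msg.method = "GET";
msg.headers = {}; // clear headers left over from the previous http Request node
return msg;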