libxively C API frequently does nothing

I'm trying to use libxively to update my feed, but it frequently seems to do nothing. I've got a basic call:
{
    xi_datastream_t& ds = mXIFeed.datastreams[2];
    ::xi_str_copy_untiln(ds.datastream_id, sizeof(ds.datastream_id), "cc-output-power", '\0');
    xi_datapoint_t& dp = ds.datapoints[0];
    ds.datapoint_count = 1;
    ::xi_set_value_f32(&dp, mChargeController->outputPower());
}
const xi_context_t* ctx = ::xi_nob_feed_update(mXIContext, &mXIFeed);
it logs the following:
[io/posix/posix_io_layer.c:182 (posix_io_layer_init)] [posix_io_layer_init]
[io/posix/posix_io_layer.c:191 (posix_io_layer_init)] Creating socket...
[io/posix/posix_io_layer.c:202 (posix_io_layer_init)] Socket creation [ok]
Once or twice I saw my Xively developer page show a GET feed, but otherwise, nothing seems to get written. Any suggestions on what I should look at?
I tried to rebuild the library to use blocking calls (it would be nice if nob didn't mean no blocking calls), but I couldn't figure out how to build it.
Thanks!
EDIT:
I was able to build a synchronous version of the library, and that seems to work. Can anyone verify that the async version works? Is there more to it than simply calling xi_nob_feed_update()?
EDIT 2:
I tried running the async example, but I'm doing something wrong, as it always complains of no data received:
$ bin/asynch_feed_update <my key> <my feed ID> example 1 example 4 example 20 example 58 example 11 example 17
example: 1 7
example: 4 7
example: 20 7
example: 58 7
example: 11 7
example: 17 7
[io/posix_asynch/posix_asynch_io_layer.c:165 (posix_asynch_io_layer_init)] [posix_io_layer_init]
[io/posix_asynch/posix_asynch_io_layer.c:174 (posix_asynch_io_layer_init)] Creating socket...
[io/posix_asynch/posix_asynch_io_layer.c:185 (posix_asynch_io_layer_init)] Setting socket non blocking behaviour...
[io/posix_asynch/posix_asynch_io_layer.c:203 (posix_asynch_io_layer_init)] Socket creation [ok]
No data within five seconds.

The asynchronous version should work. xi_nob_feed_update() is the right function for making a feed update request.
You have to call process_xively_nob_step() in a loop just after select().
In general, you should follow the asynchronous example.
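For illustration, the loop described above would have roughly the shape sketched below. This is only a sketch: drive_xively_step() is a hypothetical stand-in for the library call (in the shipped asynch_feed_update example this role is played by process_xively_nob_step() with the library's own arguments), and the fd handling is simplified.

#include <sys/select.h>
#include <sys/time.h>

/* Hypothetical stand-in for the library's per-iteration call; in the real
 * asynch example this is process_xively_nob_step() with the library's own
 * arguments. Returns non-zero once the feed update has finished. */
static int drive_xively_step(void) { return 1; }

static void run_event_loop(int xi_fd)
{
    for (;;) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(xi_fd, &rfds);            /* wake on readable socket state */
        FD_SET(xi_fd, &wfds);            /* ...and on writable state      */

        struct timeval tv = { 1, 0 };    /* 1-second poll timeout         */
        if (select(xi_fd + 1, &rfds, &wfds, NULL, &tv) < 0)
            break;                       /* select() failed               */

        /* drive the non-blocking state machine once per select() wake-up */
        if (drive_xively_step() != 0)
            break;                       /* request finished              */
    }
}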

Related

Running multiple xdotool commands from activateResult

I'm creating a gnome shell extension and implementing the search provider. In the activateResult method I want to run some code like
GLib.spawn_command_line_sync('xdotool windowactivate ' + window_id);
GLib.spawn_command_line_sync('xdotool key "ctrl+r"');
GLib.spawn_command_line_sync('xdotool type ' + some_text);
The problem is that only the first command works, and I get some errors like:
Jul 27 20:05:09 comp org.gnome.Shell.desktop[3334]: Window manager warning: Received a NET_CURRENT_DESKTOP message from a broken (outdated) client who sent a 0 timestamp
Jul 27 20:05:09 comp org.gnome.Shell.desktop[3334]: Window manager warning: Buggy client sent a _NET_ACTIVE_WINDOW message with a timestamp of 0 for 0x2e00001 (somestuff)
Jul 27 20:05:09 comp org.gnome.Shell.desktop[3334]: Window manager warning: last_focus_time (93207838) is greater than comparison timestamp (93207584). This most likely represents a buggy client sending inaccurate timestamps in messages such as _NET_ACTIVE_WINDOW. Trying to work around...
Jul 27 20:05:09 comp org.gnome.Shell.desktop[3334]: Window manager warning: last_user_time (93207838) is greater than comparison timestamp (93207584). This most likely represents a buggy client sending inaccurate timestamps in messages such as _NET_ACTIVE_WINDOW. Trying to work around...
Jul 27 20:05:09 comp org.gnome.Shell.desktop[3334]: Window manager warning: 0x2e00001 (somestuff) appears to be one of the offending windows with a timestamp of 93207838. Working around..
One thing I tried was combining all the xdotool commands into a single bash -c "... ... ..." invocation, with no luck.
After selecting a search result, how can I switch to a window and simulate key presses?
(I'm brand new to GNOME, GJS, and even JavaScript, but I do write Python daily.)
edit: I just tried spawn_command_line_async and it works. It feels sloppy, though; someone more experienced might have a better answer.
If you are writing a GNOME Shell extension, it's worth noting that there are a number of GNOME platform libraries you can rely on being available.
For simulating mouse and keyboard events, Atspi will work in most situations. As a simple example, this simulates pressing the a key:
const Atspi = imports.gi.Atspi;
Atspi.generate_keyboard_event(0, 'a', Atspi.KeySynthType.STRING);
Activating a window, on the other hand, is probably best done with the built-in window manager functions; however, these aren't documented, so you'd have to inspect the GNOME Shell JavaScript source or the source of another extension that does this (a rough, untested sketch follows below).
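For completeness, an untested sketch of that approach from within an extension might look like the following. Main.activateWindow() is the internal helper GNOME Shell itself uses; activateWindowByTitle() and the match-by-title heuristic are made up here purely for illustration.

const Main = imports.ui.main;

// Untested sketch: activate a window from inside the Shell process instead
// of shelling out to xdotool. Matching by title is only an example heuristic.
function activateWindowByTitle(title) {
    for (const actor of global.get_window_actors()) {
        const win = actor.get_meta_window();
        if (win.get_title() === title) {
            Main.activateWindow(win);  // focuses and raises the window
            return true;
        }
    }
    return false;
}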

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner at programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM, it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam on 4 processors in parallel using
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of an error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this error can occur when files that are not supposed to be empty are empty, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always contains a header entry named FoamFile, which is read from a Foam::Istream. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
During construction of the dictionary, if this header is absent, OpenFOAM stops and raises the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably that it supports OpenFOAM's abstraction mechanisms, which would be difficult to implement otherwise.
As mentioned in the comments, adding the missing header almost always solves this problem.
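For reference, a plausible header for the constant/reactions file from the question would look like the one below (the class, location and object entries are assumptions based on typical reactingFoam cases; adjust them to your setup):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}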

How can I interact with the Github API using Perl6?

I want to use the GitHub API in a script, and I want to use it as an exercise to get better at Perl 6. However, I cannot even get a simple proof of concept to work.
Through some testing I realized that GitHub requires that you supply a valid user agent, so I turned to HTTP::UserAgent. No matter what I try, I get the following error:
Internal Error: 'server returned no data'
in block at /Applications/Rakudo/share/perl6/site/sources/FD28A8E22DFE16B70B757D9981C7B6C25543060C (HTTP::UserAgent) line 259
in any at /Applications/Rakudo/share/perl6/site/precomp/F91BAB44DF15C5C298C627DD5E0F9D819ED79939.1517344679.60204/FD/FD28A8E22DFE16B70B757D9981C7B6C25543060C line 1
in method new at /Applications/Rakudo/share/perl6/site/sources/DDDD4497B34FC81BC1F5FF467999BC4DA2FA1CEB (HTTP::Response) line 25
in method get-response at /Applications/Rakudo/share/perl6/site/sources/FD28A8E22DFE16B70B757D9981C7B6C25543060C (HTTP::UserAgent) line 291
in method request at /Applications/Rakudo/share/perl6/site/sources/FD28A8E22DFE16B70B757D9981C7B6C25543060C (HTTP::UserAgent) line 159
in method get at /Applications/Rakudo/share/perl6/site/sources/FD28A8E22DFE16B70B757D9981C7B6C25543060C (HTTP::UserAgent) line 102
in method get at /Applications/Rakudo/share/perl6/site/sources/FD28A8E22DFE16B70B757D9981C7B6C25543060C (HTTP::UserAgent) line 105
in block <unit> at reporter.pl6 line 12
There is even an example in the repo that doesn't seem to work for me:
#!/usr/bin/env perl6
use v6;
use HTTP::UserAgent;
my $ua = HTTP::UserAgent.new;
$ua.timeout = 1;
my $response = $ua.get('https://github.com');
if $response.is-success {
    say $response.content;
} else {
    die $response.status-line;
}
Any tips on how to connect to Github via Perl 6? I really love many aspects of the language but this type of thing is discouraging.
EDIT: I went on the #perl6 IRC channel and no one was able to reproduce this on other OSes. I got it to work on Debian. The issue seems to be specific to OS X.
Although it is still at an alpha stage, WebServices::GitHub is perfectly serviceable. You can use it to download user information, or you can use my fork if you want to interact with issues. This program, for instance, is used to download some issues from a particular repo.

Why can't I use hPutStr after printing the result of hGetContents?

I'm new to Stack Overflow, so forgive me if I do something wrong. I'm trying to understand how a simple server would work in Haskell. I think I'm missing something very simple or fundamental about how hGetContents works.
import Network
import System.IO

main = withSocketsDo $ do
    socket <- listenOn $ PortNumber 5002
    (h, _, _) <- accept socket
    c <- hGetContents h
    -- putStrLn c                              -- doesn't work
    -- putStrLn $ head $ lines c               -- works!
    -- putStrLn $ unlines $ take 2 $ lines c   -- works!
    -- putStrLn $ unlines $ take 3 $ lines c   -- works!
    -- putStrLn $ unlines $ take 6 $ lines c   -- works!
    putStrLn $ unlines $ take 10 $ lines c     -- doesn't work
    hPutStr h $ "HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\nHello!\r\n"
    hClose h
After running the program, I navigate via web browser to http://localhost:5002. The problem seems to be that, depending on how much of the handle's contents I've parsed, I eventually become unable to send a response. I'd like to be able to parse the request before I send a response. I've commented in the code the cases that work and the cases that don't. Hoogle says that for the lazy hGetContents the handle is "semi-closed" as it is being read. Am I misunderstanding the laziness, or should I consider the handle closed once I begin parsing its contents?
The error I get is "hPutChar: resource vanished (Broken pipe)." Thanks for any help.
I tried to reproduce your problem. To do that, I executed your code and sent it a request using nc:
printf "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11" | nc localhost 5002
As expected, the server (code from your question) printed out the first 10 lines and exited without any error. The client (nc) printed:
HTTP/1.0 200 OK
Content-Length: 5
Hello!
and also exited without an error.
So at first I couldn't understand what your problem was, but then I tried sending a smaller request:
printf "1\n2\n3\n4\n5\n6\n" | nc localhost 5002
The server printed the first 6 lines and didn't exit. The client also didn't exit, so I interrupted it with Ctrl-C, and after that the server exited with the "resource vanished" error.
After some thinking it started to make sense to me. I don't understand lazy IO too well, so if my explanation isn't clear or correct, it would be helpful if someone with a better understanding improved it.
Let's follow your code. First:
(h, _, _) <- accept socket
c <- hGetContents h
You open a handle and read its content. Note that the handle is lazy and the content you get is also lazy. When we say that something is lazy we mean that it can be passed around without being evaluated (this is often referred to as 'call by name' vs 'call by value').
Now:
putStrLn $ unlines $ take 10 $ lines c
Here you pass your lazy, unevaluated content to another function, take 10. take 10 will try to evaluate the first 10 elements of a list and return them; if there are fewer than 10 elements in the list, it simply returns all of them. After take 10 we have putStrLn and unlines, which are both perfectly compatible with laziness.
Now let's say the client sends input that is only 6 lines long and then starts waiting for the response. Our server lazily receives the content and tries to print the first 10 lines. The take 10 function happily consumes the first 6 lines and passes them over to putStrLn . unlines, but what happens then? take 10 can't just finish its output, because there is absolutely no indication that this is the end: the handle is still open and bytes can still arrive from the client, so it just waits for more input.
This behaviour can be observed by running:
nc localhost 5002
and manually typing 10 lines there. The input appears on the server line by line as you type. After you type the 10th line, the server responds with the "Hello" message.
P.S.: I guess the behaviour you described happens because your web browser sends 6 to 9 lines of something with the request.
To test, debug and analyze this kind of low-level server, you should use simple tools like nc and curl instead of your web browser :)
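For example, a verbose curl request against the toy server from the question (same port as above) looks like this and prints both the request and the response headers:

curl -v http://localhost:5002/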
When you initiate a lazy read on a handle, you give up the right to do anything much else with the handle until the contents string is fully forced, or you close the handle manually (at which point attempting to force any more of the contents string will lead to bad behavior or an error).
TL;DR
This is not a situation where lazy I/O is appropriate. The situations where a lazy read on a socket is appropriate can probably be counted on zero fingers. You can use regular strict I/O if you like, or conduit, or pipes, or some Haskell web framework like Yesod or Scotty or various other competitors.
Calling hGetContents puts the handle into a "semi-closed" state. You should not perform any operations on the handle after that point. You should only use the string returned from hGetContents.
Put simply, don't use lazy I/O here. You need to manually read and write individual strings one at a time, since the timing matters.
In general, lazy I/O is kind of neat, but it doesn't work well for anything much beyond toy examples.
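To make that concrete, here is a minimal, untested sketch of the strict-I/O version: it reads the request headers line by line with hGetLine until the blank line that ends the HTTP header block, and only then writes the response on the still-open handle.

import Network
import System.IO

main :: IO ()
main = withSocketsDo $ do
    socket <- listenOn $ PortNumber 5002
    (h, _, _) <- accept socket
    hSetBuffering h LineBuffering
    readHeaders h                 -- consume the request strictly, line by line
    hPutStr h "HTTP/1.0 200 OK\r\nContent-Length: 7\r\n\r\nHello!\n"
    hClose h
  where
    -- An HTTP request's header block ends with an empty line ("\r\n"),
    -- which hGetLine hands back as "\r" (or "" depending on newline mode).
    readHeaders h = do
        line <- hGetLine h
        if line == "\r" || null line
            then return ()
            else readHeaders h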

R: How can I disable truncation of listing of package functions?

How can I list all of the results that used to occur when typing packageName<tab>, i.e. the full list offered via auto-completion? In R 2.15.0, I get the following for Matrix::<tab>:
> library(Matrix)
> Matrix::
Matrix::.__C__abIndex Matrix::.__C__atomicVector Matrix::.__C__BunchKaufman Matrix::.__C__CHMfactor Matrix::.__C__CHMsimpl
Matrix::.__C__CHMsuper Matrix::.__C__Cholesky Matrix::.__C__CholeskyFactorization Matrix::.__C__compMatrix Matrix::.__C__corMatrix
Matrix::.__C__CsparseMatrix Matrix::.__C__dCHMsimpl Matrix::.__C__dCHMsuper Matrix::.__C__ddenseMatrix Matrix::.__C__ddiMatrix
Matrix::.__C__denseLU Matrix::.__C__denseMatrix Matrix::.__C__dgCMatrix Matrix::.__C__dgeMatrix Matrix::.__C__dgRMatrix
Matrix::.__C__dgTMatrix Matrix::.__C__diagonalMatrix Matrix::.__C__dMatrix Matrix::.__C__dpoMatrix Matrix::.__C__dppMatrix
Matrix::.__C__dsCMatrix Matrix::.__C__dsparseMatrix Matrix::.__C__dsparseVector Matrix::.__C__dspMatrix Matrix::.__C__dsRMatrix
Matrix::.__C__dsTMatrix Matrix::.__C__dsyMatrix Matrix::.__C__dtCMatrix Matrix::.__C__dtpMatrix Matrix::.__C__dtrMatrix
Matrix::.__C__dtRMatrix Matrix::.__C__dtTMatrix Matrix::.__C__generalMatrix Matrix::.__C__iMatrix Matrix::.__C__index
Matrix::.__C__isparseVector Matrix::.__C__ldenseMatrix Matrix::.__C__ldiMatrix Matrix::.__C__lgCMatrix Matrix::.__C__lgeMatrix
Matrix::.__C__lgRMatrix Matrix::.__C__lgTMatrix Matrix::.__C__lMatrix Matrix::.__C__lsCMatrix Matrix::.__C__lsparseMatrix
[...truncated]
That [...truncated] message is irritating and I want to produce the full listing. Which option/flag/knob/configuration/incantation do I need to invoke in order to avoid the truncation? I have the impression that I used to see the full list, but not anymore; perhaps that was on a different OS (e.g. Linux).
I know that ls("package:Matrix") is one useful approach, but it is not the same as setting an option, and the list is different.
Unfortunately, on Windows, it looks like this behavior is hard-wired into the C code used to construct the console. So the answer seems to be that "no, you can't disable it" (at least not without modifying the sources and then recompiling R from scratch).
Here are the relevant lines from $RHOME/src/gnuwin32/console.c:
909 static void performCompletion(control c)
910 {
911 ConsoleData p = getdata(c);
912 int i, alen, alen2, max_show = 10, cursor_position = p->c - prompt_wid;
...
...
1001 if (alen > max_show)
1002 consolewrites(c, "\n[...truncated]\n");
You are correct that on some other platforms, all of the results are printed out. (I often use Emacs, for instance, and it pops all results of tab completion up in a separate buffer).
As an interesting side note, rcompgen, the backend that actually performs the tab-completion (as opposed to printing the results to the console), does always find all completions. It's just that Windows doesn't then print them out for us to see.
You can verify that this happens even on Windows by typing:
library(Matrix)
Matrix::
## Then type <TAB> <TAB>
## Then type <RET>
rc.status() ## Careful not to use tab-completion to complete rc.status !
matches <- rc.status()$comps
length(matches) # -> 288
matches # -> lots of symbols starting with 'Matrix::'
For more details about the backend, and the functions and options that control its behavior, see ?rcompgen.
