I have a jq expression that produces a long float from a simple subtraction.
Does anyone know what is causing this long float? My colleague runs the same expression on his laptop and doesn't get it; he only gets two decimal places. Seems like a local thing.
Running on macOS Mojave (yeah, I need to update).
Large Floats
cat VIAC.json | jq '.underlyingPrice - (.callExpDateMap[][][] | select(.putCall == "CALL").bid)'
41.540000000000006
41.790000000000006
Actual numbers
cat VIAC.json | jq '.underlyingPrice'
42.34
cat VIAC.json | jq '.callExpDateMap[][][] | select(.putCall == "CALL").bid'
0.8
0.55
Expected values
41.54
41.79
I know I can use awk and other tools to get what I need, but I would like to understand why this is happening.
Thanks
===
Seems like it's not specific to jq; Python 3 does the same thing:
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 42.34 - 0.8
41.540000000000006
>>>
Anyone know why python3 does this?
Found this for the problem: https://floating-point-gui.de/ explains it pretty nicely.
Seems like I'll have to use awk to get the output I want.
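For what it's worth, awk isn't strictly necessary: if two decimal places are all you need, rounding or formatting the result hides the representation error (similar rounding is possible in jq or awk as well). A minimal sketch, sticking with the Python session above:
>>> # the binary double cannot hold 41.54 exactly; round or format it for display
>>> round(42.34 - 0.8, 2)
41.54
>>> f"{42.34 - 0.8:.2f}"
'41.54'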
I'm not sure if a command like this already exists, but what about something like this in a programming language:
do this
do that
<point2>
if (something) {
    goto('point1')
}
do this
do that
<point1>
do this
do that
if (something) {
    goto('point2')
}
a command which just leads the program to a point forward or backward in the code
I know you can do this with if clauses and functions and get the same effect.
On the other hand, with this command you can portray code as blocks:
 _______________
| start motor   |<-----+
| if failure ---|------+   Go to command
|_______________|
        |
        |
        v
      Drive
My questions:
Do we need this command? Is it useful in languages like Java or PHP? And why is it missing in Java? Could it be upgraded or made better, and how? Is it enough to stop using loops altogether, or does a goto command have a major downside? Maybe its performance in compiling is bad? Why don't I use it or find it in any tutorial, when it could be a standard command like loops? Why?
I'm thankful for a nice discussion about this command, and for not pointing out how many grammar mistakes I made...
"a command which just leads the program to a point forward or backward in the code" <-- it is called GOTO command. Different programming language may implement it differently.
"nice discussion about this command" <--- After your research, mind sharing which part of the reading materials/reference/code that you don't understand or can't be execute? A sample code and screenshot may help too.. (:
I have a requirement to add 10 days to the current date and assign it to a variable, but I am getting an error:
date: illegal option -- d
This is what I tried:
$> NEW_expration_DATE=$(date -d "+10 days")
Result:
date: illegal option -- d
Usage: date [-u] [+Field Descriptors]
Try this (gdate is GNU date, if you have it installed):
NEW_expration_DATE=$(gdate -d "+10 days")
It looks like your date doesn't support -d (that's a GNU extension), and there is no simple way to do date arithmetic with plain POSIX date.
I found an answer that explains this and includes code to subtract dates; you may be able to adapt it to your case: https://unix.stackexchange.com/a/7220/162444
Good luck!
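If Python 3 happens to be installed on that box, another workaround is to let it do the date arithmetic. A minimal sketch (the %Y%m%d output format is an assumption; adjust it to whatever you need):

# print today's date plus 10 days in YYYYMMDD form
import datetime
print((datetime.date.today() + datetime.timedelta(days=10)).strftime("%Y%m%d"))

Inlined with command substitution that becomes, for example:
NEW_expration_DATE=$(python3 -c 'import datetime; print((datetime.date.today() + datetime.timedelta(days=10)).strftime("%Y%m%d"))')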
You can check the system with "uname -a" and search for that platform; for example, on AIX you can use this to get yesterday:
YESTERDAY=`TZ=aaa24 date +%Y%m%d`
(Setting TZ to a fake zone 24 hours behind UTC makes date print yesterday's date.)
I'm pretty new here so thank you in advance for the help. I'm trying to do some analysis of the entire Bitcoin transaction chain. In order to do that, I'm trying to create 2 tables
1) A full list of all Bitcoin addresses and their balances, i.e.:
| ID | Address | Balance |
-------------------------------
| 1 | 7d4kExk... | 32 |
| 2 | 9Eckjes... | 0 |
| . | ... | ... |
2) A record of the number of transactions that have ever occurred between any two addresses in the Bitcoin network
| ID | Sender | Receiver | Transactions |
--------------------------------------------------
| 1 | 7d4kExk... | klDk39D... | 2 |
| 2 | 9Eckjes... | 7d4kExk... | 3 |
| . | ... | ... | .. |
To do this I've written a (probably very inefficient) script in R that loops through every block and scrapes blockexplorer.com to compile the tables. I've tried running it a couple of times so far, but I'm running into two main issues:
1 - It's very slow... I can imagine it's going to take at least a week at the rate that it's going
2 - I haven't been able to run it for more than a day or two without it hanging. It seems to just freeze RStudio.
I'd really appreciate your help in two areas:
1 - Is there a better way to do this in R to make the code run significantly faster?
2 - Should I stop using R altogether for this and try a different approach?
Thanks in advance for the help! Please see below for the relevant chunks of code I'm using:
url_start <- "http://blockexplorer.com/b/"
url_end <- ""
readUrl <- function(url) {
  table <- try(readHTMLTable(url)[[1]])
  if (inherits(table, "try-error")) {
    message(paste("URL does not seem to exist:", url))
    errors <- errors + 1
    return(NA)
  } else {
    processed <- processed + 1
    return(table)
  }
}
block_loop <- function(end, start = 0) {
  ...
  addr_row <- 1   # starting row to fill out table
  links_row <- 1  # starting row to fill out table
  for (i in start:end) {
    print(paste0("Reading block: ", i))
    url <- paste(url_start, i, url_end, sep = "")
    table <- readUrl(url)
    if (is.na(table)) { next }
    ....
There are very close to 250,000 blocks on the site you mentioned (at least, block 260,000 gives a 404). Curling from my connection (1 MB/s down) takes about half a second per page on average. Try it yourself from the command line (just copy and paste) to see what you get:
curl -s -w "%{time_total}\n" -o /dev/null http://blockexplorer.com/b/220000
I'll assume your requests are about as fast as mine. Half a second times 250,000 is 125,000 seconds, or a day and a half. This is the absolute best you can get using any methods because you have to request the page.
Now, after doing an install.packages("XML"), I saw that running readHTMLTable("http://blockexplorer.com/b/220000") takes about five seconds on average. Five seconds times 250,000 is 1.25 million seconds, which is about two weeks. So your estimates were correct; this is really, really slow. For reference, I'm running a 2011 MacBook Pro with a 2.2 GHz Intel Core i7 and 8 GB of memory (1333 MHz).
Next, table merges in R are quite slow. Assuming about 100 records per block table (which seems about average), you'll have 25 million rows, and some of these rows hold a kilobyte of data. Assuming you can fit this table in memory, concatenating tables will be a problem.
The solution to these problems that I'm most familiar with is to use Python instead of R, BeautifulSoup4 instead of readHTMLTable, and pandas to replace R's data frame. BeautifulSoup is fast (install lxml, a parser written in C) and easy to use, and pandas is very quick too. Its DataFrame class is modeled after R's data frame, so you can probably work with it just fine. If you need something to request URLs and return the HTML for BeautifulSoup to parse, I'd suggest Requests. It's lean and simple, and the documentation is good. All of these are pip-installable.
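To make that concrete, here is a minimal sketch of that stack for a single block page; the block number, the table index, and the assumption that the first table is the one you want are placeholders to check against the real page layout:

# fetch one block page and turn its HTML tables into pandas DataFrames
from io import StringIO

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "http://blockexplorer.com/b/220000"   # example block from above
html = requests.get(url, timeout=30).text   # Requests: lean HTTP client
soup = BeautifulSoup(html, "lxml")          # lxml: fast C-based parser

tables = pd.read_html(StringIO(str(soup)))  # one DataFrame per <table> element
transactions = tables[0]                    # assumed: first table holds what you need
print(transactions.head())

One more note on the merges: accumulate rows in plain Python lists (or append them to a CSV/SQLite file as you go) and build the final DataFrame once at the end, rather than concatenating DataFrames inside the loop.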
If you still run into problems the only thing I can think of is to get maybe 1% of the data in memory at a time, statistically reduce it, and move on to the next 1%. If you're on a machine similar to mine, you might not have another option.
I am interested in TCP/IP communication from a Unix server to Pure Data. I have it working using sockets on the Unix server side and netclient on the Pure Data side; I based this on the chat-server tutorial (3.Networking > 10.chat_client.pd).
Now the problem is that the server streams the data out as a "string" message delimited with ";".
My question is: is there a way to send something other than a string message to Pure Data, like a byte stream or a serialized number stream? Can Pure Data receive such messages?
A string takes too many bytes to transfer; for example, the number "1024;" is already 5 bytes as text, while the same integer is just 4 bytes in binary.
UPDATE: for everyone who stumbles upon this post in search of the answer.
Apparently [netclient] on the Pure Data side cannot receive anything other than ;-delimited messages.
So, the solution to the problem posed above:
My question is: is there a way to send something other than a string message to Pure Data, like a byte stream or a serialized number stream? Can Pure Data receive such messages?
The solution is to use [tcpclient]; it can receive byte-stream data.
Now my question is: how do I get the four numbers back into a usable format?
At the moment I have a series of bytes, at least in the correct order.
From my UNIX server I am sending a structure
typedef struct {
    int   var_code;
    int   sample_time;
    int   hr;
    float hs;
} phy_data;
Sample data might be 2 1000000 51 2000.56
When received and printed in Pure Data I get output like this:
: 0 0 0 2 0 10 114 26 0 0 0 51 0 16 242 78
You can see the numbers 2 and 51 clearly; I guess the others are correct as well.
How can I get these numbers back into a usable format?
Maybe with some manipulation using [bytes2any] and [route]? I haven't been able to extract the data with them.
Here's an outline of what you have to do:
1) Repackage the byte list into small messages of the correct size for the various types.
Since all your elements are 4 bytes long, you simply repackage your list (or byte stream, as TCP/IP doesn't guarantee to deliver your 16 bytes as a single list; it could also decide to break them into lists of arbitrary length) into a number of 4-atom lists.
The most stable way would probably be to first serialize the list (check the "serializer" example in the [list] help) and then reassemble it into 4-element lists.
If you can use externals like zexy, you could use [repack 4] for that.
If you trust [netclient] to output your messages as complete lists, you could simply use a large [unpack ....] and four [pack]s.
2) Interpret the raw data for each sublist.
Integers are rather simple; floats are way more complicated.
integers:
|
[unpack 0 0 0 0]
| | | |
[<< 8] | | |
| | | |
[+ ] | |
| | |
[<< 8] | |
| | |
[+ ] |
| |
[<< 8] |
| |
[+ ]
|
floats are left as an exercise to the user :-)
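For reference (this is not a Pd solution), here is what that decoding amounts to outside of Pd: a minimal sketch in Python, assuming the server sends the struct unpadded and in network byte order (big-endian). The first and third fields come out as 2 and 51, matching the bytes above; if the other fields look wrong, the byte order or struct padding on the server is probably different from this assumption:

# regroup the 16 raw bytes from the question into the four 4-byte fields
import struct

raw = bytes([0, 0, 0, 2,  0, 10, 114, 26,  0, 0, 0, 51,  0, 16, 242, 78])

# ">iiif" = big-endian: three 4-byte ints followed by one 4-byte float,
# mirroring the phy_data struct (int, int, int, float)
var_code, sample_time, hr, hs = struct.unpack(">iiif", raw)
print(var_code, sample_time, hr, hs)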
The real solution to your problem would be to use a well-defined application-layer protocol rather than brewing your own.
The most widespread protocol in use for applications like Pd is certainly OSC.
In order to decode the raw OSC bytes into Pd messages, use [unpackOSC] (part of the "mrpeach" library; on Debian, you install it via the pd-osc package).
On the "server" side, you can use liblo to encode the data and send it.
Note:
Be aware that since OSC is packet-based, you will need a packetizing mechanism for stream-based protocols like TCP/IP. As of OSC-1.2, this should be SLIP. liblo should already take care of this. Check the patches accompanying [unpackOSC] for how to do this within Pd.
All of this is not needed if you are using UDP as the transport.
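For the sending side, the answer above mentions liblo (a C library); purely as a sketch of the idea, the same thing with the python-osc package over UDP could look like this (the host, port, and OSC address "/phy_data" are made up; any OSC library, liblo included, will do):

# pip install python-osc; send the four values as a single OSC message over UDP
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)                   # Pd host and port (placeholders)
client.send_message("/phy_data", [2, 1000000, 51, 2000.56])   # three ints and a float

On the Pd side the incoming packets are then decoded with [unpackOSC] as described above.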
I'm working on a patch that plays samples from a piano, which works in Xcode to build a piano app for iPad. I'm trying to add an ADSR to create sustain, but I can't seem to get it working. Could someone point me in the right direction? Thanks!
Patch:
https://docs.google.com/file/d/0B4-qHDgzbDB3VUlwM09FSEowZWM/edit
The ADSR is just an envelope that you multiply the sound output with. However, it is meant to run on a temporal axis together with the trigger of the sound. When I look at your patch I notice another thing: why are you reloading the samples into the arrays every time you trigger them? The arrays should be filled on startup of the app, like this:
[loadbang]
|
[read -resize c1.wav c1Array(
|
[soundfiler]
Later, when you actually just want to play back, you do
[r c1]
|
[t b]
|
[tabplay~ c1Array]
|
[throw~]
and at one central point in your patch you can have
[catch~]
|
[dac~]
(add the main volume there). Notice there are no connections between the three parts!