I have an rdkafka producer written the following way, where localhost:9095 is tunneled to a DigitalOcean server in Germany running a Kafka broker, while I, the localhost, am in New York.
I am noticing that at around ~150 milliseconds of thread::sleep(Duration::from_millis(150)) the last few of my messages stop being acknowledged by the callback (messages 8 and 9, for example), as if they didn't arrive. I know they did, because I can see them in the Kafka topic.
I'm wondering if my guess is correct that it is solely the latency between NY and Germany that dictates how long the broker takes to respond, or if there is some config setting like max.response.wait.ms I can set so the acknowledgements arrive sooner.
Of course, if I increase the sleep time to ~200ms or higher, I can see each message logged loud and clear. If I decrease the sleep to 50ms, the producer is dropped before any messages are sent and they never arrive at the topic... right?
use rand::RngCore;
use rdkafka::config::ClientConfig;
use rdkafka::producer::{BaseRecord, ThreadedProducer};
use std::thread;
use std::time::Duration;

// ProducerLogger is my custom ProducerContext that logs delivery results.
let producer: ThreadedProducer<ProducerLogger> = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9095")
    .create_with_context(ProducerLogger {})
    .expect("Failed to create producer");

for i in 1..10 {
    // Generate 8 random bytes so each message is distinguishable in the log.
    let mut data = [0u8; 8];
    rand::thread_rng().fill_bytes(&mut data);
    println!("Sending msg with data {:x?}", data);
    producer
        .send(
            BaseRecord::to("rt-test")
                .key(&format!("key-{}", i))
                .payload(&format!("payload-{}", i)),
        )
        .expect("couldn't send message");
    // producer.flush(Duration::from_secs(1));
    thread::sleep(Duration::from_millis(150)); // <--- Parameter to play with
}
Sending msg with data [da, 44, 1c, f0, 6, 1d, da, 9b]
Sending msg with data [37, 82, b2, 58, e2, 91, 40, 33]
Sending msg with data [8d, f8, 5c, df, 68, 12, a1, 26]
Sending msg with data [bd, 13, 7c, 81, 62, 5c, 93, 75]
Produced message key-1 successfully. Offset: 193. Partition :0
Produced message key-2 successfully. Offset: 194. Partition :0
Produced message key-3 successfully. Offset: 195. Partition :0
Produced message key-4 successfully. Offset: 196. Partition :0
Sending msg with data [d, 9d, 1c, 73, a7, 9f, b4, 2]
Produced message key-5 successfully. Offset: 197. Partition :0
Sending msg with data [c9, 1a, 3b, 8c, 31, cc, 84, f4]
Produced message key-6 successfully. Offset: 198. Partition :0
Sending msg with data [8a, 33, 26, 92, 2a, bf, d1, 7d]
Produced message key-7 successfully. Offset: 199. Partition :0
Sending msg with data [78, eb, e2, 41, 8d, b9, 29, 68]
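For reference, the commented-out flush call is the knob I've been playing with; blocking on it before the producer is dropped should give the last delivery callbacks a chance to fire. A minimal sketch (the 5-second timeout is an arbitrary choice):

// After the send loop: block until all queued messages are delivered
// (or the timeout expires) so the remaining delivery callbacks can run
// before `producer` is dropped.
let _ = producer.flush(Duration::from_secs(5));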
I have tried to run a simple task using the Airflow BashOperator, but I keep getting stuck: my DAG never stops running, it stays green forever without success or failure. When I check the logs I see something like this. Thanks in advance for your time and answers.
airflow-scheduler_1 | [SQL: INSERT INTO task_fail (task_id, dag_id, execution_date, start_date, end_date, duration) VALUES (%(task_id)s, %(dag_id)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(duration)s) RETURNING task_fail.id]
airflow-scheduler_1 | [parameters: {'task_id': 'first_task', 'dag_id': 'LocalInjestionDag', 'execution_date': datetime.datetime(2023, 1, 20, 8, 0, tzinfo=Timezone('UTC')), 'start_date': datetime.datetime(2023, 1, 23, 3, 35, 27, 332954, tzinfo=Timezone('UTC')), 'end_date': datetime.datetime(2023, 1, 23, 3, 35, 27, 710572, tzinfo=Timezone('UTC')), 'duration': 0}]
postgres_1 | 2023-01-23 03:55:59.712 UTC [4336] ERROR: column "execution_date" of relation "task_fail" does not exist at character 41
I have tried with execution_datetime, using xcom_push, and creating functions with xcom and changing to the PythonOperator, but everything still ends in the same error.
The most recent AAPL 10-K XBRL Instance Document, for example:
doc <- "https://www.sec.gov/Archives/edgar/data/320193/000032019319000119/0000320193-19-000119-index.htm"
Run xbrlDoAll and xbrl_get_statements from the XBRL and finstr packages, respectively:
library(XBRL)
library(finstr)

get_xbrl_doc <- xbrlDoAll(doc)
statements <- xbrl_get_statements(get_xbrl_doc)
Error: Each row of output must be identified by a unique combination of keys.
Keys are shared for 34 rows:
* 6, 8
* 5, 7, 9
* 49, 51
* 48, 50
* 55, 57
* 54, 56
* 11, 13
* 10, 12
* 25, 27
* 24, 26
* 59, 61
* 58, 60
* 29, 31
* 28, 30
* 63, 64, 66
* 62, 65
This sequence works perfectly up until 2019, when Apple switched from "XBRL Instance Document" to "Extracted XBRL Instance Document". Has anybody found a workaround?
Not only have they switched that, but the fact IDs now carry some kind of 32-digit key that seems to have no connection to any of the other files in the taxonomy. This is probably related to the iXBRL process, and I've seen it with another company as well.
However, if the problem were just the word "Extracted", you would only need to change the script; I'm guessing that is not the case, and that the solution lies in finding out what those 32-digit keys mean.
I'm not an R user and I don't use finstr, but I guess we have the same problem. So the answer to your question would be: you need to write your own parser now, or wait until someone finishes theirs.
I am using Vegeta to run some stress tests, but I am having trouble generating a JSON report. Running the following command, I am able to see the text results:
vegeta attack -targets="./vegeta_sagemaker_True.txt" -rate=10 -duration=2s | vegeta report -output="attack.json" -type=text
Requests [total, rate] 20, 10.52
Duration [total, attack, wait] 2.403464884s, 1.901136s, 502.328884ms
Latencies [mean, 50, 95, 99, max] 945.385864ms, 984.768025ms, 1.368113304s, 1.424427549s, 1.424427549s
Bytes In [total, mean] 5919, 295.95
Bytes Out [total, mean] 7104, 355.20
Success [ratio] 95.00%
Status Codes [code:count] 200:19 400:1
Error Set:
400
When I run the same command changing -type=text to -type=json, I receive really weird numbers, and they don't make sense to me:
{
"latencies": {
"total": 19853536952,
"mean": 992676847,
"50th": 972074984,
"95th": 1438787021,
"99th": 1636579198,
"max": 1636579198
},
"bytes_in": {
"total": 5919,
"mean": 295.95
},
"bytes_out": {
"total": 7104,
"mean": 355.2
},
"earliest": "2019-04-24T14:32:23.099072+02:00",
"latest": "2019-04-24T14:32:25.00025+02:00",
"end": "2019-04-24T14:32:25.761337546+02:00",
"duration": 1901178000,
"wait": 761087546,
"requests": 20,
"rate": 10.519793517492838,
"success": 0.95,
"status_codes": {
"200": 19,
"400": 1
},
"errors": [
"400 "
]
}
Does anyone know why this should be happening?
Thanks!
These numbers are nanoseconds -- the internal representation of time.Duration in Go.
For example, latencies.mean in the JSON is 992676847, which means 992676847 nanoseconds, that is, 992676847 / 1000 / 1000 = 992.676847 ms.
In vegeta, if you declare the type as text (-type=text), it uses NewTextReporter and prints each time.Duration as a user-friendly string. If you declare the type as json (-type=json), it uses NewJSONReporter and emits time.Duration's internal representation:
A Duration represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.
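The conversion is easy to check in any language; a minimal sketch in Rust (the constant is just latencies.mean copied from the JSON above):

use std::time::Duration;

fn main() {
    // latencies.mean from the JSON report, interpreted as a nanosecond count
    let mean = Duration::from_nanos(992_676_847);
    println!("{:?}", mean); // prints: 992.676847ms
}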
I have an issue with Graphite, specifically with carbon-cache. At some point I had it running; now, coming back after a few weeks, I tried to start Graphite again. The Django webapp runs fine, but it seems I have an issue with the carbon-cache backend. Graphite is installed in /opt/graphite and I run /opt/graphite/bin/carbon-cache.py start. This is the error I get:
root@stfutm01:/opt/graphite/bin# ./carbon-cache.py start
Starting carbon-cache (instance a)
Traceback (most recent call last):
File "./carbon-cache.py", line 30, in <module>
run_twistd_plugin(__file__)
File "/opt/graphite/lib/carbon/util.py", line 92, in run_twistd_plugin
runApp(config)
File "/usr/local/lib/python2.7/dist-packages/twisted/scripts/twistd.py", line 23, in runApp
_SomeApplicationRunner(config).run()
File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 386, in run
self.application = self.createOrGetApplication()
File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 446, in createOrGetApplication
ser = plg.makeService(self.config.subOptions)
File "/opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py", line 21, in makeService
return service.createCacheService(options)
File "/opt/graphite/lib/carbon/service.py", line 127, in createCacheService
from carbon.writer import WriterService
File "/opt/graphite/lib/carbon/writer.py", line 34, in <module>
schemas = loadStorageSchemas()
File "/opt/graphite/lib/carbon/storage.py", line 123, in loadStorageSchemas
archives = [ Archive.fromString(s) for s in retentions ]
File "/opt/graphite/lib/carbon/storage.py", line 107, in fromString
(secondsPerPoint, points) = whisper.parseRetentionDef(retentionDef)
File "/usr/local/lib/python2.7/dist-packages/whisper.py", line 76, in parseRetentionDef
(precision, points) = retentionDef.strip().split(':')
ValueError: need more than 1 value to unpack
I can see that it is an issue with the split retentionDef.strip().split(':'). My storage schema config file (/opt/graphite/conf/storage-schemas.conf) looks like this:
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:1y
[ts3]
priority = 100
pattern = ^skarp\.ts3\..*
retentions = 60s:1y,1h,:5y
Any hints on where I should be looking? Or does anybody know what I'm missing here?
I think the problem is the [ts3] retentions line. "The retentions line can specify multiple retentions. Each retention of frequency:history is separated by a comma."
In ts3 there appear to be 3 retentions (comma-delimited), with the second ("1h") not specifying a history and the last (":5y") not specifying a frequency. Since "1h" contains no colon, whisper's retentionDef.strip().split(':') yields only one value, which is exactly the "need more than 1 value to unpack" in your traceback.
retentions = 60s:1y,1h,:5y
I think you may have meant:
retentions = 60s:1y,1h:5y
Which would be 60 second data for 1 year and 1 hour data for 5 years after that.
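To see which entries trip the parser, here is a small illustrative checker (a Rust sketch, not whisper's actual code) that splits a retentions line the same way, requiring a non-empty frequency and history on each side of the colon:

fn main() {
    // The broken [ts3] line from the question
    for def in "60s:1y,1h,:5y".split(',') {
        match def.split_once(':') {
            Some((freq, hist)) if !freq.is_empty() && !hist.is_empty() => {
                println!("{def}: ok (frequency = {freq}, history = {hist})")
            }
            _ => println!("{def}: malformed"),
        }
    }
    // Output:
    //   60s:1y: ok (frequency = 60s, history = 1y)
    //   1h: malformed
    //   :5y: malformed
}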
When I try to use sqlite3_exec to insert new data into the database, it returns error 14 (SQLITE_CANTOPEN). But when I use sqlite3_prepare_v2 to select, it works fine. Is there an issue with permissions? How do I fix it?
sprintf(temp, "INSERT INTO owned (pid, oname, okey, ohp, oatt, odef) VALUES (%d, %c%s%c, %c%s%c, %d, %d, %d);", pid, 34, poname, 34, 34, passkey, 34, sqlite3_column_int(res3,0), sqlite3_column_int(res3,1), sqlite3_column_int(res3,2));
error = sqlite3_exec(conn, temp, 0, 0, 0);