I can't save the trained model data and sample pictures, and it prints a "one pic error" - DCGAN

The following is the output during the training process:
[ 1 Epoch:[ 0/25] [ 0/ 163] time: 1.6582, d_loss: 17.73454285, g_loss: 0.00000020
one pic error!... // I do not know why
[ 2 Epoch:[ 0/25] [ 1/ 163] time: 2.0025, d_loss: 11.87505627, g_loss: 0.00000958
Code:
try:
    samples, d_loss, g_loss = self.sess.run(
        [self.sampler, self.d_loss, self.g_loss],
        feed_dict={
            self.z: sample_z,
            self.inputs: sample_inputs,
        },
    )
    save_images(samples, image_manifold_size(samples.shape[0]),
                './{}/train_{:08d}.png'.format(config.sample_dir, counter))
    print("[Sample] d_loss: %.8f, g_loss: %.8f" % (d_loss, g_loss))
except:
    print("one pic error!...")
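The bare except above hides the actual exception, so the real cause never shows up in the log. Here is a minimal sketch of how to surface it; the makedirs call is only my assumption that the sample directory might be missing (a common reason for save_images to fail), not something stated in the question:

import os
import traceback

# Assumption: the sample directory used in the save path may not exist yet.
os.makedirs('./{}'.format(config.sample_dir), exist_ok=True)

try:
    save_images(samples, image_manifold_size(samples.shape[0]),
                './{}/train_{:08d}.png'.format(config.sample_dir, counter))
except Exception:
    # Print the full traceback instead of swallowing the error,
    # so the real reason behind "one pic error!..." becomes visible.
    traceback.print_exc()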

Related

Parsing an array of objects, do some math with their index

I have a large JSON which is actually a concatenated array of objects from several configuration files. I would like to use them to bring up a menu in a bash script. To make the menu easier to read, the JSON array contains special objects that trigger a line break. In the end, the user picks an index from the array.
A simplified json looks like this:
[
{
"index" : 0,
"value" : "one a"
},
{
"index" : 3,
"value" : "two a"
},
{
"value" : ""
},
{
"index" : 2,
"value" : "three a"
},
{
"value" : ""
},
{
"index" : 1,
"value" : "one b"
},
{
"index" : 3,
"value" : "two b"
},
{
"index" : 2,
"value" : "three b"
}
]
All "a" values come from the first file, all "b" values from the second file. The entries with an empty value are line breaks.
What I've got so far, after hours of research, is this:
jq --raw-output 'to_entries[] | "\(.key + 1). \(.value.value) (\(.value.index))"' test.json
Which produces this out of the above data:
1. one a (0)
2. two a (3)
3. (null)
4. three a (2)
5. (null)
6. one b (1)
7. two b (3)
8. three b (2)
Now the user would type 8 to work with "three b".
What I need, however, is this:
1. one a (0)
2. two a (3)
3. three a (2)
4. one b (1)
5. two b (3)
6. three b (2)
So the user would need to type 6 to do the same.
Any ideas welcome!
Using foreach to count would be one way:
foreach .[] as {$index, $value} (0;
if $value != "" then . + 1 else . end;
if $value != "" then "\(.). \($value) (\($index))" else "" end
)
1. one a (0)
2. two a (3)
3. three a (2)
4. one b (1)
5. two b (3)
6. three b (2)
Demo
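A small variation (my suggestion, not part of the answer above): emitting empty instead of an empty string drops the blank separator lines from the menu output entirely. Using the same test.json as in the question:
jq --raw-output '
  foreach .[] as {$index, $value} (0;
    if $value != "" then . + 1 else . end;
    if $value != "" then "\(.). \($value) (\($index))" else empty end
  )' test.json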

Incrementing a value progressively on each item

So I have this Grafana dashboard that I'm building up using jq and different files. The problem I end up with is that when you export the JSON produced by Grafana, it exports it the way it currently sees it. Example:
[
{
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 22
},
"panels": []
},
{
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 43
},
"panels": []
},
{
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 17
},
"panels": []
}
]
But the problem is that the grid positions (the y values) need to be properly incremented so that when you reload the Grafana dashboards, the panels nested under row panels are set to their proper locations. If a sub-panel has a gridPos.y that is lower than the row panel's gridPos.y, it will appear in a weird location.
I tried using reduce and foreach but I'm not super good with these constructs yet. For example, I tried this:
[
1 as $currentY |
foreach .[] as $item (
[];
(. + [$item * {"gridPos": {"y": ($currentY + 1)}}]);
. | last
)
]
But I can't figure out how to increment $currentY within the loop to get proper incrementation. The objective would be to nest a second foreach/reduce to continue setting and incrementing $currentY in all panels and sub-panels.
Can you help? Thanks!
Note: I know I should use reduce rather than .|last; this was just my latest attempt. Don't point that out, I want guidance on how to increment $currentY in the current approach.
With your existing approach, you need to reference the y field in each $item processed and increment its value, rather than the predefined value of $currentY, i.e.
[
1 as $currentY |
foreach .[] as $item (
[];
(. + [$item * {"gridPos": {"y": ($currentY + $item.gridPos.y )}}]);
last
)
]
which again could be written as
[
1 as $currentY |
foreach .[] as $item (
[];
(. + [ $item | .gridPos.y += $currentY ]);
last
)
]
which again could be written with a simple walk expression
1 as $currentY |
walk ( if type == "object" and has("gridPos") then .gridPos.y += $currentY else . end )
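As a usage sketch (the file names are placeholders, not from the question), the walk version can be applied to an exported dashboard straight from the command line:
jq '1 as $currentY |
  walk(if type == "object" and has("gridPos") then .gridPos.y += $currentY else . end)' \
  exported-dashboard.json > adjusted-dashboard.json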

How to export the constructed dictionary from Newsmap (Quanteda)

I have trained a newsmap model in the Newsmap package for quanteda in R and am trying to export the large dictionary it constructed based on my corpus (not the seed dictionary).
I have tried this code, but it only gives me the 10 most associated terms per country in a list format, which I also fail to extract in order to form a dictionary object I can use in R.
Dict <- coef(model)
I would really appreciate any and all help!
You only need to extract the names of the vectors, with the desired number of words passed to n.
> quanteda::dictionary(lapply(coef(model, n = 1000), FUN = names))
Dictionary object with 226 key entries.
- [bi]:
- burundi, burundi's, bujumbura, burundian, nkurunziza, uprona, msd, nduwimana, hutus, tutsi, radebe, drcongo, rapporteur, elderly, mushikiwabo, generation, kayumba, faustin, hutu, olga [ ... and 980 more ]
- [dj]:
- djibouti, djibouti's, djiboutian, western-led, pretty, photo, watkins, ask, entebbe, westerners, mujahideen, salvation, osprey, persistent, horn, afdb, donors, ismael, nevis, grenade [ ... and 980 more ]
- [er]:
- eritrea, eritreans, eritrean, keetharuth, issaias, eritrea's, binnie, sheila, somaliland, catania, mandeb, brutal, sicily's, lana, horn, lampedusa, aman, afdb, donors, monitoring [ ... and 980 more ]
- [et]:
- ethiopia, ethiopian, addis, ababa, addis, ababa, hailemariam, desalegn, ethiopians, maasho, ethiopia's, mandeb, igad, dibaba, genzebe, mesfin, bekele, spla, shrikesh, laxmidas [ ... and 980 more ]
- [ke]:
- kenya, kenyan, nairobi, nairobi, uhuru, lamu, mombasa, mpeketoni, kenyans, kws, nairobi's, akwiri, ruto, westgate, kenyatta's, mombasa, makaburi, kenyatta, kenya's, ol [ ... and 980 more ]
- [km]:
- comoros, mazen, emiratis, oil-rich, canterbury, lahiya, shoukri, gender, wadia, lombok, brisbane's, entire, christiana, blahodatne, everest's, culiacan, kamensk-shakhtinsky, protestants, pk-5, parwan [ ... and 980 more ]
[ reached max_nkey ... 220 more keys ]
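If the goal is to reuse the constructed dictionary in later R sessions, it can be written to disk and reloaded like any other R object (the file name below is just an example):
dict <- quanteda::dictionary(lapply(coef(model, n = 1000), FUN = names))
saveRDS(dict, "newsmap_dictionary.rds")    # save the dictionary object to disk
dict <- readRDS("newsmap_dictionary.rds")  # reload it in a later session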

Cassandra collection tombstones

I have created a table with a collection, inserted a record, and took an sstabledump of it, and I see there is a range tombstone for it in the sstable. Does this tombstone ever get removed? Also, when I run sstablemetadata on the only sstable, it shows "Estimated droppable tombstones" as 0.5, and similarly it shows one record with an epoch time equal to the insert time under "Estimated tombstone drop times: 1548384720: 1". Does this mean that when I run sstablemetadata on a table with collections, the estimated droppable tombstone ratio and drop-time values are not true, dependable values, due to the collection/list range tombstones?
CREATE TABLE ks.nmtest (
reservation_id text,
order_id text,
c1 int,
order_details map<text, text>,
PRIMARY KEY (reservation_id, order_id)
) WITH CLUSTERING ORDER BY (order_id ASC)
user@cqlsh:ks> insert into nmtest (reservation_id , order_id , c1, order_details ) values('3','3',3,{'key':'value'});
user@cqlsh:ks> select * from nmtest ;
reservation_id | order_id | c1 | order_details
----------------+----------+----+------------------
3 | 3 | 3 | {'key': 'value'}
(1 rows)
[root@localhost nmtest-e1302500201d11e983bb693c02c04c62]# sstabledump mc-5-big-Data.db
WARN 02:52:19,596 memtable_cleanup_threshold has been deprecated and should be removed from cassandra.yaml
[
{
"partition" : {
"key" : [ "3" ],
"position" : 0
},
"rows" : [
{
"type" : "row",
"position" : 41,
"clustering" : [ "3" ],
"liveness_info" : { "tstamp" : "2019-01-25T02:51:13.574409Z" },
"cells" : [
{ "name" : "c1", "value" : 3 },
{ "name" : "order_details", "deletion_info" : { "marked_deleted" : "2019-01-25T02:51:13.574408Z", "local_delete_time" : "2019-01-25T02:51:13Z" } },
{ "name" : "order_details", "path" : [ "key" ], "value" : "value" }
]
}
]
}
SSTable: /data/data/ks/nmtest-e1302500201d11e983bb693c02c04c62/mc-5-big
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Bloom Filter FP chance: 0.010000
Minimum timestamp: 1548384673574408
Maximum timestamp: 1548384673574409
SSTable min local deletion time: 1548384673
SSTable max local deletion time: 2147483647
Compressor: org.apache.cassandra.io.compress.LZ4Compressor
Compression ratio: 1.0714285714285714
TTL min: 0
TTL max: 0
First token: -155496620801056360 (key=3)
Last token: -155496620801056360 (key=3)
minClustringValues: [3]
maxClustringValues: [3]
Estimated droppable tombstones: 0.5
SSTable Level: 0
Repaired at: 0
Replay positions covered: {CommitLogPosition(segmentId=1548382769966, position=6243201)=CommitLogPosition(segmentId=1548382769966, position=6433666)}
totalColumnsSet: 2
totalRows: 1
Estimated tombstone drop times:
1548384720: 1
Another question is about the nodetool tablestats output - what does "slice" refer to in Cassandra?
Average live cells per slice (last five minutes): 1.0
Maximum live cells per slice (last five minutes): 1
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 0
sstablemetadata does not have any information about your table that is not held within the sstable itself, since it is not guaranteed to be run on a system that has Cassandra running, and even if it were, it would be very complex to pull the schema information from it.
Since gc_grace_seconds is a table parameter and not part of the sstable metadata, the tool defaults to assuming a gc grace of 0, so by default the droppable times listed in that histogram will be more of a histogram of the tombstone creation times. If you know your gc grace, you can pass it to your sstablemetadata call with the -g parameter, like:
sstablemetadata -g 864000 mc-5-big-Data.db
See http://cassandra.apache.org/doc/latest/tools/sstable/sstablemetadata.html for information on the tool's output.
With collections it's just a normal range tombstone, with all that that entails. These tombstones are used to avoid requiring a read-before-write when overwriting the value of a multi-cell collection.
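To illustrate that last point with the question's own table (a sketch from my side, not part of the original answer): overwriting the whole map rewrites the collection and therefore carries a range tombstone, while appending individual entries with + does not need one.
-- Overwrites the whole collection: a range tombstone clears any previous entries.
UPDATE ks.nmtest SET order_details = {'key': 'value'} WHERE reservation_id = '3' AND order_id = '3';
-- Appends a single entry: no read-before-write and no range tombstone for the collection.
UPDATE ks.nmtest SET order_details = order_details + {'key2': 'value2'} WHERE reservation_id = '3' AND order_id = '3';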

graphite/carbon - can't understand how to feed them

I wanted to start playing with it and test it, so I put in this config:
storage-schemas.conf:
[short2]
pattern = ^short2\.
retentions = 10s:1m
storage-aggregation.conf:
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
What I think my config says:
get data every 10 seconds and save it for 1 minute, so a total of 10 points will be saved
Now if I go to
http://localhost/render/?target=short2.sum&format=json&from=-1h
I see a lot of data points with null values, a lot more than 10.
OK, so I gave up on that. Then I said, let's try feeding it data once every 10 seconds. If I do
echo "short2.sum 22 `date +%s`" | nc -q0 127.0.0.1 2003
wait 11 seconds
echo "short2.sum 23 `date +%s`" | nc -q0 127.0.0.1 2003
Now, looking at the API, I can see that only the last point gets registered, like:
[
null,
1464781920
],
[
null,
1464781980
],
[
null,
1464782040
],
[
null,
1464782100
],
[
23,
1464782160
],
Now if I send it another point (well after 10 seconds),
echo "short2.sum 24 `date +%s`" | nc -q0 127.0.0.1 2003
this is what I get:
[
null,
1464781920
],
[
null,
1464781980
],
[
null,
1464782040
],
[
null,
1464782100
],
[
24,
1464782160
],
Only once in a couple of tries will I see them counted as new; mostly they just overwrite each other instead of acting like new data.
Actually:
[short2]
pattern = ^short2\.
retentions = 10s:1m
means: keep all metrics starting with short2. for 1 minute at 10-second resolution (each datapoint represents 10s). It also means that, unless other storage schemas are defined for short2., values are kept for only the last 1 minute.
http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-schemas-conf
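A sketch of a schema that keeps more history (the archive lengths below are just an example, not from the answer): 10-second points for 6 hours, then 1-minute points for a week. Note that Whisper files keep the retention they were created with, so after changing this the existing .wsp files have to be resized (e.g. with whisper-resize.py) or deleted and recreated.
[short2]
pattern = ^short2\.
retentions = 10s:6h,1m:7d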
