Decoding magnet URI from coppersurfer.tk scrape file

I am trying to decode the hashes from a scrape file (downloaded from the coppersurfer.tk site) in order to build magnet URIs.
After splitting the huge file, I tried to decode each hash with
hash = hashlib.sha1(f).hexdigest() # info hash
and got a list such as:
6768033e216468247bd031a0a2d9876d79818f8f : {'downloaded': 2374, 'complete': 0, 'incomplete': 75}
e5eaaedf19d4602337c71b041a669b9d70bda764 : {'downloaded': 0, 'complete': 0, 'incomplete': 1}
a2e43672a55dcda5d6b1cbdf356da4f6a3e6178d : {'downloaded': 0, 'complete': 0, 'incomplete': 1}
ea01e99635aa17b7d9803c3004210202b1e9e612 : {'downloaded': 1, 'complete': 0, 'incomplete': 2}
b9c569eb1820a1a67633757fc96801ed0c8f3281 : {'downloaded': 1085, 'complete': 1, 'incomplete': 0}
92c9de8c9a40405f56aa5c4d55c22720a208207f : {'downloaded': 0, 'complete': 0, 'incomplete': 1}
a398de47b654426f4ef39054c8bbfe9f0348cd74 : {'downloaded': 304, 'complete': 1, 'incomplete': 0}
11a9f43eead2164042c87bf75fa72d885d4afe86 : {'downloaded': 0, 'complete': 0, 'incomplete': 1}
254b675173ccb75085a0e25a1da6c1ec2c5846a0 : {'downloaded': 0, 'complete': 0, 'incomplete': 1}
but when I combine one into a magnet URI such as
magnet:?xt=urn:btih:6768033e216468247bd031a0a2d9876d79818f8f
and try to download it in a torrent client, it doesn't seem to work (I tried multiple other hashes with the same result).
Do you know what I need to do in order to decode the hashes correctly?
Thank you for your help.

The scrape file full_scrape_not_a_tracker.tar.gz contains a bencoded full scrape, and judging from the examples it has been decoded correctly.
The conversion to a magnet link is also done correctly.
However, a search for 6768033e216468247bd031a0a2d9876d79818f8f reveals that:
6768033e216468247bd031a0a2d9876d79818f8f = sha1( 0x0000000000000000000000000000000000000000 )
i.e. it's not a real info_hash, so the full scrape likely contains some bogus info_hashes.
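You can verify that quickly (Python 3):

import hashlib

# Per the claim above, SHA-1 of twenty zero bytes should print
# 6768033e216468247bd031a0a2d9876d79818f8f
print(hashlib.sha1(bytes(20)).hexdigest())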
It's probably better to test torrents that have seeders,
i.e. those where the 'complete' value is non-zero.
So keep testing hashes and eventually one will turn out to be a real torrent.
Also, adding a tracker to the magnet link will probably speed up the lookup a bit:
magnet:?xt=urn:btih:6768033e216468247bd031a0a2d9876d79818f8f&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969
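If you build such links in code, the tracker URL has to be percent-encoded; a minimal sketch (Python 3, standard library only), using the hash from the question:

from urllib.parse import quote

info_hash = "6768033e216468247bd031a0a2d9876d79818f8f"
tracker = "udp://tracker.coppersurfer.tk:6969"
# safe="" makes quote() encode ':' and '/' as well, matching the link above
magnet = "magnet:?xt=urn:btih:%s&tr=%s" % (info_hash, quote(tracker, safe=""))
print(magnet)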

The scrape file should already contain the hashes for each torrent in their raw (20-byte) representation, so no additional hashing is required. All you need to do is convert them to their hex representation.
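A minimal sketch of that (Python 3), assuming the file is a standard bencoded full scrape and that the third-party bencodepy package is installed; the file name is a stand-in:

import bencodepy  # third-party: pip install bencodepy

with open("full_scrape", "rb") as f:  # hypothetical file name
    scrape = bencodepy.decode(f.read())

# 'files' maps raw 20-byte info-hashes to their stats dicts; the key
# already IS the info-hash, so no hashlib.sha1() call, just hex-encode it.
for info_hash, stats in scrape[b"files"].items():
    print("magnet:?xt=urn:btih:" + info_hash.hex(), stats)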

Related

DynamoDB stream Kinesis - Incomplete hash range found between

I have a Kinesis stream from DynamoDB, and I am processing it through the AWS KCL SDK v1.14.0. I see the occasional errors below in the logs. I also observe that startingHashKey is always 1 and endingHashKey is 0 for all shards in the DynamoDB lease table. Any clue why this is happening?
Incomplete hash range found between {
  "leaseKey" : "shardId-00000001601992582645-697387df",
  "leaseOwner" : "eu01-stg01-vendor-service-74b4cd66b6-6hcfb:29dafc2f-c87f-4c70-b210-d38db7ebfc87",
  "leaseCounter" : 4080,
  "concurrencyToken" : null,
  "lastCounterIncrementNanos" : null,
  "checkpoint" : {
    "sequenceNumber" : "47515300000000010794885359",
    "subSequenceNumber" : 0,
    "shardEnd" : false
  },
  "pendingCheckpoint" : null,
  "ownerSwitchesSinceCheckpoint" : 0,
  "parentShardIds" : [ ],
  "childShardIds" : [ ],
  "hashKeyRange" : {
    "startingHashKey" : 0,
    "endingHashKey" : 1
  }
} and {
  "leaseKey" : "shardId-00000001602005637902-6358a08a",
  "leaseOwner" : "eu01-stg01-vendor-service-74b4cd66b6-6hcfb:29dafc2f-c87f-4c70-b210-d38db7ebfc87",
  "leaseCounter" : 397,
  "concurrencyToken" : null,
  "lastCounterIncrementNanos" : null,
  "checkpoint" : {
    "sequenceNumber" : "TRIM_HORIZON",
    "subSequenceNumber" : 0,
    "shardEnd" : false
  },
  "pendingCheckpoint" : null,
  "ownerSwitchesSinceCheckpoint" : 0,
  "parentShardIds" : [ "shardId-00000001601992582736-2c3fc0ff" ],
  "childShardIds" : [ ],
  "hashKeyRange" : {
    "startingHashKey" : 0,
    "endingHashKey" : 1
  }
}
I also get this error, but the stream seems to work properly (I am using it to monitor data changes).
I then dug into the source of this error log and found that the code in question was added on Jun 22, 2020.
This is the source code: https://github.com/awslabs/amazon-kinesis-client/blob/6fbfc21ad7d6a2722491806a411d168304167f7f/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/PeriodicShardSyncManager.java#L298
This is the blame: https://github.com/awslabs/amazon-kinesis-client/blame/6fbfc21ad7d6a2722491806a411d168304167f7f/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/PeriodicShardSyncManager.java#L298
This is the commit: https://github.com/awslabs/amazon-kinesis-client/commit/3a88a60a4ee7bb8f969b36cb79e7665a7395b6ec
It is only in the latest version, 1.14.0, so I went back one version to 1.13.3 and no longer see this error.
I have submitted an issue on GitHub to find out the reason for the error: https://github.com/awslabs/amazon-kinesis-client/issues/758

StackExchange.Redis.RedisTimeoutException: Timeout performing EVAL

I am using a Redis cache for storing sessions from an ASP.NET Core application, but at some point the following exception occurred:
StackExchange.Redis.RedisTimeoutException: Timeout performing EVAL, inst: 2, queue: 16, qu: 0, qs: 15, qc: 1, wr: 0, wq: 0, in: 501, ar: 0, clientName: , serverEndpoint: 104.211.115.54:6379, keyHashSlot: 4394 (Please take a look at this article for some common client-side issues that can cause timeouts: http://stackexchange.github.io/StackExchange.Redis/Timeouts)
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\ConnectionMultiplexer.cs:line 2120
at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\RedisBase.cs:line 81
at StackExchange.Redis.RedisDatabase.ScriptEvaluate(String script, RedisKey[] keys, RedisValue[] values, CommandFlags flags) in c:\code\StackExchange.Redis\StackExchange.Redis\StackExchange\Redis\RedisDatabase.cs:line 1052
at Microsoft.Extensions.Caching.Redis.RedisExtensions.HashMemberGet(IDatabase cache, String key, String[] members)
at Microsoft.Extensions.Caching.Redis.RedisCache.GetAndRefresh(String key, Boolean getData)
at Microsoft.AspNetCore.Session.DistributedSession.Load()
I have added extra connection properties in the connection string: connectTimeout=10000,syncTimeout=2000,connectRetry=3
What additional settings are required to resolve this exception?

Contiki node A sending data to another node B; I want nodes A and B to send data through a border router to a server running on Linux

I am able to send data from a node to server code running on Linux through a border router. I achieved that using the https://github.com/contiki-os/contiki/blob/master/examples/udp-ipv6/udp-client.c example code from Contiki. I am running Python code on the Linux board to receive the data (see the Linux userspace code below, which communicates between the Linux board and each node running the Contiki UDP sender example). Let's call one node NODE_A, the second node NODE_B, and the Linux board NODE_C. NODE_A's and NODE_B's data are reaching NODE_C; I also want NODE_A and NODE_B to talk to each other. How can I make NODE_A and NODE_B talk to each other? Thanks!
On NODE_A, edit the udp-client.c example something like this,
where the address of NODE_A is fd00::dddd:aaaa:bbbb,
the address of NODE_B is fd00::abcd:aaaa:bbbb,
and the address of NODE_C is fd00::1:
uip_ipaddr_t NODE_B;
uip_ipaddr_t NODE_C;
uip_ip6addr(&NODE_C, 0xfd00, 0, 0, 0, 0, 0, 0, 1);
uip_ip6addr(&NODE_B, 0xfd00, 0, 0, 0, 0, 0xabcd, 0xaaaa, 0xbbbb);
/* connection with the Linux server: send to its port 3000, listen on 3001 */
client_conn_NODE_C = udp_new(&NODE_C, UIP_HTONS(3000), NULL);
udp_bind(client_conn_NODE_C, UIP_HTONS(3001));
/* connection with NODE_B: send to its port 3002, listen on 3003 */
client_conn_NODE_B = udp_new(&NODE_B, UIP_HTONS(3002), NULL);
udp_bind(client_conn_NODE_B, UIP_HTONS(3003));
On NODE_B:
uip_ipaddr_t NODE_A;
uip_ipaddr_t NODE_C;
uip_ip6addr(&NODE_C, 0xfd00, 0, 0, 0, 0, 0, 0, 1);
uip_ip6addr(&NODE_A, 0xfd00, 0, 0, 0, 0, 0xdddd, 0xaaaa, 0xbbbb);
/* connection with the Linux server: send to its port 3000, listen on 3001 */
client_conn_NODE_C = udp_new(&NODE_C, UIP_HTONS(3000), NULL);
udp_bind(client_conn_NODE_C, UIP_HTONS(3001));
/* connection with NODE_A: send to its port 3003, listen on 3002 (mirror of NODE_A's setup) */
client_conn_NODE_A = udp_new(&NODE_A, UIP_HTONS(3003), NULL);
udp_bind(client_conn_NODE_A, UIP_HTONS(3002));
NODE_C is my Linux board; I wrote test code something like this:
import socket

UDP_LOCAL_IP = 'fd00::1'   # NODE_C's address (must match the nodes' NODE_C)
UDP_LOCAL_PORT = 3000      # must match the remote port in udp_new() above

try:
    socket_rx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    socket_rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    socket_rx.bind((UDP_LOCAL_IP, UDP_LOCAL_PORT))
except Exception:
    print "ERROR: Server Port Binding Failed"
    raise

print 'UDP server ready: %s' % UDP_LOCAL_PORT
print

while True:
    data, addr = socket_rx.recvfrom(1024)
    print "address : ", addr
    print "received message: ", data
    print "\n"
    # reply to whichever node sent the datagram
    socket_rx.sendto("Hello from server\n", addr)

Riak search on CRDT data types - memory backend

I am using Riak 2.2.3 and trying to search in a map bucket type, but nothing is ever returned.
I've configured a bucket type "dist_cache" on the memory backend:
# riak-admin bucket-type status dist_cache
dist_cache is active
active: true
allow_mult: true
backend: <<"memory_mult">>
basic_quorum: false
big_vclock: 50
chash_keyfun: {riak_core_util,chash_std_keyfun}
claimant: 'riak#127.0.0.1'
datatype: map
dvv_enabled: true
dw: quorum
last_write_wins: false
linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
n_val: 3
notfound_ok: true
old_vclock: 86400
postcommit: []
pr: 0
precommit: []
pw: 0
r: quorum
rw: quorum
search_index: <<"expirable_token">>
small_vclock: 50
w: quorum
young_vclock: 20
I've then enabled search in /etc/riak/:
search = on
Then I configured an index with the default schema and associated it with the bucket type (see above).
I can successfully store and retrieve values, using keys, in that bucket. I have stored 3 values in registers: binary data, an integer (timestamp), and a string:
[
{{"attrs", :register}, <<131, 97, 111>>},
{{"iat_i", :register}, "1540923453"},
{{"test_s", :register}, "paul"}
]
(displayed after formatting from the Elixir shell, using Elixir's Riak library.)
However, nothing is found when I search for these values:
iex(74)> :riakc_pb_socket.search(pid, "expirable_token", "iat_i:[0 TO *]")
{:ok, {:search_results, [], 0.0, 0}}
iex(75)> :riakc_pb_socket.search(pid, "expirable_token", "iat_i:1540923453")
{:ok, {:search_results, [], 0.0, 0}}
iex(76)> :riakc_pb_socket.search(pid, "expirable_token", "test_s:paul")
{:ok, {:search_results, [], 0.0, 0}}
iex(77)> :riakc_pb_socket.search(pid, "expirable_token", "test_s:*")
{:ok, {:search_results, [], 0.0, 0}}
In addition, /var/log/riak/solr.log doesn't show any error messages for these requests.
Am I missing something?
I needed to remove a few options from the Java startup options, but now Java is up and running, and solr.log does show error messages when I try a malformed request.
EDIT:
After trying #vempo's solution:
I have suffixed the field with _register, but it still does not work. Here is how the field looks:
iex(12)> APISexAuthBearerCacheRiak.get("ddd", opts)
[
{{"attrs", :register}, <<131, 98, 0, 0, 1, 188>>},
{{"iat_i", :register}, "1542217847"},
{{"test_flag", :flag}, true},
{{"test_register", :register}, "pierre"}
]
but the search request still returns no result:
iex(15)> :riakc_pb_socket.search(pid, "expirable_token", "test_register:*")
{:ok, {:search_results, [], 0.0, 0}}
iex(16)> :riakc_pb_socket.search(pid, "expirable_token", "test_register:pierre")
{:ok, {:search_results, [], 0.0, 0}}
iex(17)> :riakc_pb_socket.search(pid, "expirable_token", "test_register:*")
{:ok, {:search_results, [], 0.0, 0}}
iex(18)> :riakc_pb_socket.search(pid, "expirable_token", "test_flag:true")
{:ok, {:search_results, [], 0.0, 0}}
iex(19)> :riakc_pb_socket.search(pid, "expirable_token", "test_flag:*")
Still no output in /var/log/riak/solr.log, and the index seems correctly set up:
iex(14)> :riakc_pb_socket.list_search_indexes(pid)
{:ok,
[
[index: "expirable_token", schema: "_yz_default", n_val: 3],
[index: "famous", schema: "_yz_default", n_val: 3]
]}
For searching within maps, the rules are different. According to Searching with Data Types, there are four schemas for maps, one for each embedded type:
*_flag
*_counter
*_register
*_set
So in your case you should be searching attrs_register, iat_i_register, and test_s_register.
As a side note, the suffixes _s and _i are probably redundant: they are used by the default schema to decide the type of a regular field, but are useless with embedded datatypes.
UPDATE
And to sum up the rules:
a flag field named test will be indexed as test_flag (query test_flag:*)
a register field named test will be indexed as test_register (query test_register:*)
a counter field named test will be indexed as test_counter (query test_counter:*)
a set field named test will be indexed as test_set (query test_set:*)
This is nicely shown in the table in Searching with Data Types: Embedded Schemas.
See also the definition of dynamic fields for embedded datatypes in the default schema, section <!-- Riak datatypes embedded fields -->.
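To make the mapping concrete, a minimal sketch of a corrected query (Python, assuming the official riak client package and its fulltext_search helper, a local node on the default protobuf port, and the index name from the question; the same suffixed field names apply from any client, including riakc_pb_socket):

import riak  # official Python client: pip install riak

client = riak.RiakClient(pb_port=8087)  # hypothetical local node

# A register named "test" inside the map is indexed as "test_register",
# so the suffixed name is the field to query:
results = client.fulltext_search('expirable_token', 'test_register:paul')
print(results['num_found'], results['docs'])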

UnicodeWarning While Using Dictionaries

I've been playing with Python 2.7 for a while, and I'm now trying to make my own encryption/decryption algorithm.
I'm trying to make it support non-ASCII characters.
So this is a part of the dictionary:
... u'\xe6': '1101100', 'i': '0001000', u'\xea': '1100001', 'm': '0001100', u'\xee': '1100111', 'q': '0010000', 'u': '0010100', u'\xf6': '1110010', 'y': '0011000', '}': '1001111'}
But when I try to convert, for example, "é" into binary, doing
base64 = encrypt[i]
where encrypt is the name of the dict and i = u"é",
I get this error:
Warning (from warnings module):
File "D:\DeskTop 2\Programs\Projects\4.py", line 174
base64 = encrypt[i]
UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
Traceback (most recent call last):
File "D:\DeskTop 2\Programs\Projects\4.py", line 197, in
main()
File "D:\DeskTop 2\Programs\Projects\4.py", line 196, in main
decryption(key, encrypt, decrypt)
File "D:\DeskTop 2\Programs\Projects\4.py", line 174, in decryption
base64 = encrypt[i]
KeyError: '\xf1'
Also, I did start with
# -*- coding: utf-8 -*-
Alright, sorry for the useless post.
I found the fix. Basically, I did:
for i in user_input:
    base64 = encrypt[i]
but i would be a raw byte string like '\xf1'.
I added
j = i.decode("latin-1")
so that j = u'\xf1',
and now it works :D
