'code': 'SIGERROR' Validate signature error: * is signed by ** but it is not contained of permission - python-3.6

I tried to create a TRX transaction on the Shasta network, passing the correct private key and using the TronWeb port for Python (tronapi), but it gives this error:
{'code': 'SIGERROR',
'txid': '44212bea170d07e2c83c5a4c1ba96e6617165474401f8af1e2a7f3c6f09257d6',
'message': 'Validate signature error: bc9d036b9f7f2a1d1b9688aafef323484b4bc0836faedb52168b4d6f47c780824840d8a9c3904863453b69e875f62106743aecf9d2c912d4911ca00839632aea1c is signed by TNBWAv3eDZ3W9c61etU34DEy2X1kx5v6m5 but it is not contained of permission.'}
My script is as follows:
from tronapi import Tron
import json
full_node = 'https://api.shasta.trongrid.io'
solidity_node = 'https://api.shasta.trongrid.io'
event_server = 'https://api.shasta.trongrid.io'
tron = Tron(full_node=full_node, solidity_node=solidity_node, event_server=event_server)
tron.private_key = private_key
tron.default_address = sender_address
txn = tron.trx.send_transaction(to=receiver_address, amount=amount, options={
    'from': sender_address,
    'message': memo_text
})
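For context, this SIGERROR generally means the key that produced the signature does not control the from/owner address (TNBWAv3eDZ3W9c61etU34DEy2X1kx5v6m5). A quick sanity check is to derive the address from the private key and compare it with sender_address. The sketch below assumes tronapi exposes tron.address.from_private_key as a Python mirror of tronWeb's address.fromPrivateKey; confirm the helper exists in your installed version:
# Sketch only: verify the private key actually controls the sender address.
# ASSUMPTION: tron.address.from_private_key mirrors tronWeb's address.fromPrivateKey
# and returns the derived address (base58/hex forms); check your tronapi version.
derived = tron.address.from_private_key(private_key)
print(derived)  # compare the base58 form with sender_address
assert derived['base58'] == sender_address, 'private key does not match sender_address'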

Related

Receiving a timeout error when trying to use Heroku Data Redis with StackExchange.Redis

I'm receiving the following error when trying to retrieve data from Heroku Data Redis in a .NET Core app hosted on Heroku using Docker. The cache works locally, but I get this error message when deployed to Heroku:
StackExchange.Redis.RedisTimeoutException: The timeout was reached before the message could be written to the output buffer, and it was not sent, command=HMGET, timeout: 60000, inst: 0, qu: 1, qs: 0, aw: False, bw: CheckingForTimeout, serverEndpoint: [amazon aws host:port], mc: 1/1/0, mgr: 10 of 10 available, clientName: d3d51ae0-3bd0-4434-8767-2028ec9f6c41(SE.Redis-v2.6.66.47313), IOCP: (Busy=0,Free=1000,Min=10,Max=1000), WORKER: (Busy=0,Free=32767,Min=10,Max=32767), POOL: (Threads=5,QueuedItems=0,CompletedItems=986), v: 2.6.66.47313 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
Stack Trace:
2023-01-30T16:49:20.014624+00:00 app[web.1]: Stack Trace:
2023-01-30T16:49:20.014625+00:00 app[web.1]:
2023-01-30T16:49:20.014625+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisExtensions.HashMemberGetAsync(IDatabase cache, String key, String[] members)
2023-01-30T16:49:20.014626+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAndRefreshAsync(String key, Boolean getData, CancellationToken token)
2023-01-30T16:49:20.014626+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAsync(String key, CancellationToken token)
2023-01-30T16:49:20.014626+00:00 app[web.1]: at SudokuCollective.Cache.CacheService.GetByUserNameWithCacheAsync(IUsersRepository`1 repo, IDistributedCache cache, String cacheKey, DateTime expiration, ICacheKeys keys, String username, String license, IResult result) in /src/SudokuCollective.Api/SudokuCollective.Cache/CacheService.cs:line 1029
2023-01-30T16:49:20.014627+00:00 app[web.1]: at SudokuCollective.Data.Services.AuthenticateService.AuthenticateAsync(ILoginRequest request) in /src/SudokuCollective.Api/SudokuCollective.Data/Services/AuthenticateService.cs:line 89
2023-01-30T16:49:20.014627+00:00 app[web.1]: at SudokuCollective.Api.Controllers.V1.LoginController.PostAsync(LoginRequest request) in /src/SudokuCollective.Api/SudokuCollective.Api/Controllers/V1/LoginController.cs:line 80, License: , AppId: 0, RequestorId: 0
2023-01-30T16:49:20.014638+00:00 app[web.1]: StackExchange.Redis.RedisTimeoutException: The timeout was reached before the message could be written to the output buffer, and it was not sent, command=HMGET, timeout: 60000, inst: 0, qu: 0, qs: 0, aw: False, bw: CheckingForTimeout, serverEndpoint: ec2-3-221-27-134.compute-1.amazonaws.com:19570, mc: 1/1/0, mgr: 10 of 10 available, clientName: d3d51ae0-3bd0-4434-8767-2028ec9f6c41(SE.Redis-v2.6.66.47313), IOCP: (Busy=0,Free=1000,Min=10,Max=1000), WORKER: (Busy=1,Free=32766,Min=10,Max=32767), POOL: (Threads=5,QueuedItems=0,CompletedItems=1277), v: 2.6.66.47313 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
2023-01-30T16:49:20.014638+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisExtensions.HashMemberGetAsync(IDatabase cache, String key, String[] members)
2023-01-30T16:49:20.014640+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAndRefreshAsync(String key, Boolean getData, CancellationToken token)
2023-01-30T16:49:20.014642+00:00 app[web.1]: at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAsync(String key, CancellationToken token)
2023-01-30T16:49:20.014643+00:00 app[web.1]: at SudokuCollective.Cache.CacheService.GetByUserNameWithCacheAsync(IUsersRepository`1 repo, IDistributedCache cache, String cacheKey, DateTime expiration, ICacheKeys keys, String username, String license, IResult result) in /src/SudokuCollective.Api/SudokuCollective.Cache/CacheService.cs:line 1029
2023-01-30T16:49:20.014644+00:00 app[web.1]: at SudokuCollective.Data.Services.AuthenticateService.AuthenticateAsync(ILoginRequest request) in /src/SudokuCollective.Api/SudokuCollective.Data/Services/AuthenticateService.cs:line 89
2023-01-30T16:49:20.014655+00:00 app[web.1]: at SudokuCollective.Api.Controllers.V1.LoginController.PostAsync(LoginRequest request) in /src/SudokuCollective.Api/SudokuCollective.Api/Controllers/V1/LoginController.cs:line 80
The cache is configured as follows in the Startup.cs file:
string cacheConnectionString = "";
ConfigurationOptions options;
if (!_environment.IsStaging())
{
cacheConnectionString = Configuration.GetConnectionString("CacheConnection");
options = ConfigurationOptions.Parse(Configuration.GetConnectionString("CacheConnection"));
}
else
{
options = GetHerokuRedisConfigurationOptions();
cacheConnectionString = options.ToString();
}
services.AddStackExchangeRedisCache(redisOptions => {
redisOptions.Configuration = cacheConnectionString;
redisOptions.ConfigurationOptions = options;
});
The GetHerokuRedisConfigurationOptions method configures the options as follows:
private static ConfigurationOptions GetHerokuRedisConfigurationOptions()
{
    // Get the connection string from the ENV variables
    var redisUrlString = Environment.GetEnvironmentVariable("REDIS_URL");

    // parse the connection string
    var redisUri = new Uri(redisUrlString);
    var userInfo = redisUri.UserInfo.Split(':');

    var config = new ConfigurationOptions
    {
        EndPoints = { { redisUri.Host, redisUri.Port } },
        Password = userInfo[1],
        AbortOnConnectFail = false,
        ConnectRetry = 3,
        ConnectTimeout = 60000,
        SyncTimeout = 60000,
        AsyncTimeout = 60000,
        Ssl = true,
        SslProtocols = System.Security.Authentication.SslProtocols.Tls12,
    };

    // Add the tls certificate
    config.CertificateSelection += delegate
    {
        var cert = new X509Certificate2("[path to pfx file]");
        return cert;
    };

    return config;
}
I suspected this could be a timeout issue, so in the GetHerokuRedisConfigurationOptions method I set the timeouts to 60000 milliseconds, but the issue persists. How can I register the IDistributedCache service so it can successfully retrieve data without the above timeout?
I was able to solve the issue. I believe Docker wasn't preserving the pfx certificate, because I was getting authorization errors on the SSL connection to Heroku Data Redis. I wasn't able to inspect the file system Docker produced on Heroku, but I logged information confirming the pfx file wasn't found. I think this is a security feature in Docker, as you would want to limit the dissemination of such files.
In either case, since it is a self-signed certificate you just need to disable peer verification. The Heroku documentation describes how to do this for the supported languages; since we're running .NET Core through Docker, this is how I configured it:
In Startup.cs:
ConfigurationOptions options;
string cacheConnectionString = "";

if (!_environment.IsStaging())
{
    options = ConfigurationOptions.Parse(Configuration.GetConnectionString("CacheConnection"));
    cacheConnectionString = Configuration.GetConnectionString("CacheConnection");
}
else
{
    options = GetHerokuRedisConfigurationOptions();
    cacheConnectionString = options.ToString();
}

services.AddSingleton<Lazy<IConnectionMultiplexer>>(sp =>
    new Lazy<IConnectionMultiplexer>(() =>
    {
        return ConnectionMultiplexer.Connect(options);
    }));

services.AddStackExchangeRedisCache(redisOptions => {
    redisOptions.InstanceName = "SudokuCollective";
    redisOptions.Configuration = cacheConnectionString;
    redisOptions.ConfigurationOptions = options;
    redisOptions.ConnectionMultiplexerFactory = () =>
    {
        var serviceProvider = services.BuildServiceProvider();
        Lazy<IConnectionMultiplexer> connection = serviceProvider.GetService<Lazy<IConnectionMultiplexer>>();
        return Task.FromResult(connection.Value);
    };
});
Then within Startup.cs I define a static method to initialize Heroku Redis configuration options:
private static ConfigurationOptions GetHerokuRedisConfigurationOptions()
{
    // Get the connection string from the ENV variables
    var redisUrlString = Environment.GetEnvironmentVariable("REDIS_URL");

    // parse the connection string
    var redisUri = new Uri(redisUrlString);
    var userInfo = redisUri.UserInfo.Split(':');

    var config = new ConfigurationOptions
    {
        EndPoints = { { redisUri.Host, redisUri.Port } },
        Password = userInfo[1],
        AbortOnConnectFail = true,
        ConnectRetry = 3,
        Ssl = true,
        SslProtocols = System.Security.Authentication.SslProtocols.Tls12,
    };

    // Disable peer certificate verification
    config.CertificateValidation += delegate { return true; };

    return config;
}
I believe that this line:
config.CertificateValidation += delegate { return true; };
is the equivalent of the following Java example from the Heroku documentation:
@Configuration
class AppConfig {
    @Bean
    public LettuceClientConfigurationBuilderCustomizer lettuceClientConfigurationBuilderCustomizer() {
        return clientConfigurationBuilder -> {
            if (clientConfigurationBuilder.build().isUseSsl()) {
                clientConfigurationBuilder.useSsl().disablePeerVerification();
            }
        };
    }
}
It basically disables certificate verification. The app now runs as expected.

pytest with httpx.AsyncClient cannot find newly created database records

I am trying to set up pytest with httpx.AsyncClient and SQLAlchemy's AsyncSession with FastAPI. Everything practically mimics the tests in the FastAPI full-stack repo, except for the async parts.
There are no issues with the CRUD unit tests. The issue arises when running API tests using AsyncClient from the httpx library.
The issue is that any request made by the client only has access to the users (in my case) created before the client fixture was initialized (set up).
My pytest conftest.py setup is like this:
from typing import Dict, Generator, Callable
import asyncio
from fastapi import FastAPI
import pytest
# from sqlalchemy.orm import Session
from sqlalchemy.ext.asyncio import AsyncSession
from httpx import AsyncClient
import os
import warnings
import sqlalchemy as sa
from alembic.config import Config
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import sessionmaker


async def get_test_session() -> Generator:
    test_engine = create_async_engine(
        settings.SQLALCHEMY_DATABASE_URI + '_test',
        echo=False,
    )
    # expire_on_commit=False will prevent attributes from being expired
    # after commit.
    async_sess = sessionmaker(
        test_engine, expire_on_commit=False, class_=AsyncSession
    )
    async with async_sess() as sess, sess.begin():
        yield sess


@pytest.fixture(scope="session")
async def async_session() -> Generator:
    test_engine = create_async_engine(
        settings.SQLALCHEMY_DATABASE_URI + '_test',
        echo=False,
        pool_size=20, max_overflow=0
    )
    # expire_on_commit=False will prevent attributes from being expired
    # after commit.
    async_sess = sessionmaker(
        test_engine, expire_on_commit=False, class_=AsyncSession
    )
    yield async_sess


@pytest.fixture(scope="session")
async def insert_initial_data(async_session: Callable):
    async with async_session() as session, session.begin():
        # insert first superuser - basic CRUD ops to insert data in test db
        await insert_first_superuser(session)
        # inserts test.superuser@example.com
        await insert_first_test_user(session)
        # inserts test.user@example.com


@pytest.fixture(scope='session')
def app(insert_initial_data) -> FastAPI:
    return FastAPI()


@pytest.fixture(scope='session')
async def client(app: FastAPI) -> Generator:
    from app.api.deps import get_session
    app.dependency_overrides[get_session] = get_test_session
    async with AsyncClient(
        app=app, base_url="http://test",
    ) as ac:
        yield ac
    # reset dependencies
    app.dependency_overrides = {}
So in this case, only the superuser test.superuser@example.com and the normal user test.user@example.com are available while running the API tests. For example, the code below is able to fetch the access token just fine:
async def authentication_token_from_email(
    client: AsyncClient, session: AsyncSession,
) -> Dict[str, str]:
    """
    Return a valid token for the user with given email.
    """
    email = 'test.user@example.com'
    password = 'test.user.password'
    user = await crud.user.get_by_email(session, email=email)
    assert user is not None
    data = {"username": email, "password": password}
    response = await client.post(f"{settings.API_V1_STR}/auth/access-token",
                                 data=data)
    auth_token = response.cookies.get('access_token')
    assert auth_token is not None
    return auth_token
But the modified code below doesn't; here I try to insert a new user and then log in to get an access token.
async def authentication_token_from_email(
    client: AsyncClient, session: AsyncSession,
) -> Dict[str, str]:
    """
    Return a valid token for the user with given email.
    If the user doesn't exist it is created first.
    """
    email = random_email()
    password = random_lower_string()
    user = await crud.user.get_by_email(session, email=email)
    if not user:
        user_in_create = UserCreate(email=email,
                                    password=password)
        user = await crud.user.create(session, obj_in=user_in_create)
    else:
        user_in_update = UserUpdate(password=password)
        user = await crud.user.update(session, db_obj=user, obj_in=user_in_update)
    assert user is not None
    # works fine up to this point, user inserted successfully
    # now try to send http request to fetch token, and user is not found in the db
    data = {"username": email, "password": password}
    response = await client.post(f"{settings.API_V1_STR}/auth/access-token",
                                 data=data)
    auth_token = response.cookies.get('access_token')
    # returns None.
    return auth_token
What is going on here? I'd appreciate any help!
It turns out that all I needed to do, for a reason I do not understand, was to define the FastAPI dependency override function inside the client fixture:
before
async def get_test_session() -> Generator:
    test_engine = create_async_engine(
        settings.SQLALCHEMY_DATABASE_URI + '_test',
        echo=False,
    )
    # expire_on_commit=False will prevent attributes from being expired
    # after commit.
    async_sess = sessionmaker(
        test_engine, expire_on_commit=False, class_=AsyncSession
    )
    async with async_sess() as sess, sess.begin():
        yield sess


@pytest.fixture(scope='session')
async def client(app: FastAPI) -> Generator:
    from app.api.deps import get_session
    app.dependency_overrides[get_session] = get_test_session
    async with AsyncClient(
        app=app, base_url="http://test",
    ) as ac:
        yield ac
    # reset dependencies
    app.dependency_overrides = {}
after
#pytest.fixture(scope="function")
async def session(async_session) -> Generator:
async with async_session() as sess, sess.begin():
yield sess
#pytest.fixture
async def client(app: FastAPI, session:AsyncSession) -> Generator:
from app.api.deps import get_session
# this needs to be defined inside this fixture
# this is generate that yields session retrieved from `session` fixture
def get_sess():
yield session
app.dependency_overrides[get_session] = get_sess
async with AsyncClient(
app=app, base_url="http://test",
) as ac:
yield ac
app.dependency_overrides = {}
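For completeness, a test consuming these fixtures might look like the sketch below (assuming pytest-asyncio or an equivalent async test plugin is configured, and reusing the authentication_token_from_email helper from the question):
# Sketch only: exercises the function-scoped `client` and `session` fixtures above.
import pytest

@pytest.mark.asyncio
async def test_new_user_can_log_in(client, session):
    token = await authentication_token_from_email(client=client, session=session)
    assert token is not None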
I'd appreciate any explanation of this behavior. Thanks!

get_current_user doesn't work (OAuth2PasswordBearer problems)

This is actually the first time it hasn't worked; I mean, I've done this before, but now I have no idea what's wrong.
I am trying to implement the basic get_current_user function for FastAPI, but somehow it doesn't work.
When I try it in Swagger, authorization works fine, but the endpoint that uses the current user simply doesn't work.
This is the part that belongs to the endpoint file:
router = APIRouter(prefix='/api/v1/users')
router1 = APIRouter()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl='/api-token-auth/')


@router1.post('/api-token-auth/')
async def auth(form: OAuth2PasswordRequestForm = Depends(), db: Session = Depends(get_db)):
    user = await utils.get_user_by_username(form.username, db)  # type: User
    if not user:
        raise HTTPException(status_code=400, detail="Incorrect username or password")
    if not utils.validate_password(form.password, user.hashed_password):
        raise HTTPException(status_code=400, detail="Incorrect username or password")
    return await utils.create_token(user.id, db)


async def get_current_user(token: str = Depends(oauth2_scheme), db: Session = Depends(get_db)):
    print(token)
    user = await utils.get_user_by_token(token, db)
    if not user:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid authentication credentials",
            headers={"WWW-Authenticate": "Bearer"},
        )
    return user


@router.get("/me", response_model=DisplayUser)
async def read_users_me(current_user: User = Depends(get_current_user)):
    return current_user
And this is the function that creates the token (I have checked, and it definitely works and returns a string):
async def create_token(user_id: int, db: Session):
    """Token generation"""
    letters = string.ascii_lowercase
    token = ''.join(random.choice(letters) for _ in range(25))
    created_token = Token(
        expires=datetime.now() + timedelta(weeks=2),
        user_id=user_id,
        token=token
    )
    db.add(created_token)
    db.commit()
    db.refresh(created_token)
    token = AuthUser.from_orm(created_token)
    return token.token
But when I print(token) in the get_current_user function, it prints undefined, and I don't know why. Am I using the dependency wrong or something?
Thanks in advance!
Since it prints undefined, it seems like the frontend is expecting the response in a different format (undefined is what accessing a non-existent object key in JavaScript results in).
The OAuth2 response should have the token under access_token by default:
access_token (required): The access token string as issued by the authorization server.
token_type (required): The type of token this is, typically just the string "bearer".
Example response from the above link:
{
  "access_token": "MTQ0NjJkZmQ5OTM2NDE1ZTZjNGZmZjI3",
  "token_type": "bearer",
  "expires_in": 3600,
  "refresh_token": "IwOGYzYTlmM2YxOTQ5MGE3YmNmMDFkNTVk",
  "scope": "create"
}
In your "create_token(user.id, db)" ensure the returned token contains these two values.
{
  "access_token": "",
  "token_type": "bearer"
}
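A minimal way to do that, reusing the route and helpers already shown in the question (a sketch, not the only possible shape), is to wrap the raw token string before returning it:
# Sketch: return the token in the shape OAuth2 clients and the Swagger UI expect.
@router1.post('/api-token-auth/')
async def auth(form: OAuth2PasswordRequestForm = Depends(), db: Session = Depends(get_db)):
    user = await utils.get_user_by_username(form.username, db)
    if not user or not utils.validate_password(form.password, user.hashed_password):
        raise HTTPException(status_code=400, detail="Incorrect username or password")
    token = await utils.create_token(user.id, db)  # still returns the plain token string
    return {"access_token": token, "token_type": "bearer"}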

Read telegram channel messages

So I need to read all new messages of one specific channel that I'm in (not as an admin). I searched for different client APIs (.NET, PHP, Node.js) but none of them helped.
Do you have any idea how I could do this?
Thanks!
Here is how I did it:
Install telegram-cli: https://github.com/vysheng/tg
Install the CLI wrapper pytg: https://github.com/luckydonald/pytg
from pytg import Telegram
from pytg.utils import coroutine

tg = Telegram(telegram="./tg/bin/telegram-cli", pubkey_file="./tg/tg-server.pub")
receiver = tg.receiver

QUIT = False


@coroutine
def main_loop():
    try:
        while not QUIT:
            msg = (yield)  # it waits until it got a message, stored now in msg.
            if msg.text is None:
                continue
            print(msg.event)
            print(msg.text)
    except GeneratorExit:
        pass
    except KeyboardInterrupt:
        pass
    else:
        pass


receiver.start()
receiver.message(main_loop())
Node.js version:
const path = require('path');
const TelegramAPI = require('tg-cli-node');

const config = {
    telegram_cli_path: path.join(__dirname, 'tg/bin/telegram-cli'), // path to tg-cli (see https://github.com/vysheng/tg)
    telegram_cli_socket_path: path.join(__dirname, 'socket'), // path for socket file
    server_publickey_path: path.join(__dirname, 'tg/tg-server.pub'), // path to server key (traditionally, in %tg_cli_path%/tg-server.pub)
}

const Client = new TelegramAPI(config)

Client.connect(connection => {
    connection.on('message', message => {
        console.log('message : ', message)
        console.log('message event : ', message.event)
        console.log('message text : ', message.text)
        console.log('message from :', message.from)
    })
    connection.on('error', e => {
        console.log('Error from Telegram API:', e)
    })
    connection.on('disconnect', () => {
        console.log('Disconnected from Telegram API')
    })
})
If you use a Telegram bot instead, the first step is to add the bot as a channel admin; otherwise it can't read channel messages (see the sketch below).
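A minimal bot-based sketch with python-telegram-bot (a v13-style API is assumed; BOT_TOKEN is a placeholder and the bot must already be an admin of the channel) could look like this:
# Sketch only: print every new channel post the bot can see.
# ASSUMPTIONS: python-telegram-bot v13.x, BOT_TOKEN placeholder, bot added as channel admin.
from telegram.ext import Updater, MessageHandler, Filters

def on_channel_post(update, context):
    post = update.channel_post
    if post and post.text:
        print(post.chat.title, ':', post.text)

updater = Updater("BOT_TOKEN")
updater.dispatcher.add_handler(MessageHandler(Filters.update.channel_post, on_channel_post))
updater.start_polling()
updater.idle()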

Query graphite index.json for a specific sub-tree

I'm querying Graphite's index.json to get all the metrics. Is there an option to pass a root metric and get only a sub-tree? Something like:
http://<my.graphite>/metrics/index.json?query="my.metric.subtree"
That is not supported.
What you can do, however, is call /metrics/find recursively (calling it again for each branch encountered).
Something like this:
#!/usr/bin/python
from __future__ import print_function
import requests
import json
import argparse
try:
    from Queue import Queue
except:
    from queue import Queue
from threading import Thread, Lock
import sys
import unicodedata

outLock = Lock()


def output(msg):
    with outLock:
        print(msg)
        sys.stdout.flush()


class Walker(Thread):
    def __init__(self, queue, url, user=None, password=None, seriesFrom=None, depth=None):
        Thread.__init__(self)
        self.queue = queue
        self.url = url
        self.user = user
        self.password = password
        self.seriesFrom = seriesFrom
        self.depth = depth

    def run(self):
        while True:
            branch = self.queue.get()
            try:
                branch[0].encode('ascii')
            except Exception as e:
                with outLock:
                    sys.stderr.write('found branch with invalid characters: ')
                    sys.stderr.write(unicodedata.normalize('NFKD', branch[0]).encode('utf-8', 'xmlcharrefreplace'))
                    sys.stderr.write('\n')
            else:
                if self.depth is not None and branch[1] == self.depth:
                    output(branch[0])
                else:
                    self.walk(branch[0], branch[1])
            self.queue.task_done()

    def walk(self, prefix, depth):
        payload = {
            "query": (prefix + ".*") if prefix else '*',
            "format": "treejson"
        }
        if self.seriesFrom:
            payload['from'] = self.seriesFrom
        auth = None
        if self.user is not None:
            auth = (self.user, self.password)
        r = requests.get(
            self.url + '/metrics/find',
            params=payload,
            auth=auth,
        )
        if r.status_code != 200:
            sys.stderr.write(r.text + '\n')
            raise Exception(
                'Error walking finding series: branch={branch} reason={reason}'
                .format(branch=unicodedata.normalize('NFKD', prefix).encode('ascii', 'replace'), reason=r.reason)
            )
        metrics = r.json()
        for metric in metrics:
            try:
                if metric['leaf']:
                    output(metric['id'])
                else:
                    self.queue.put((metric['id'], depth + 1))
            except Exception as e:
                output(metric)
                raise e


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--url", help="Graphite URL", required=True)
    parser.add_argument("--prefix", help="Metrics prefix", required=False, default='')
    parser.add_argument("--user", help="Basic Auth username", required=False)
    parser.add_argument("--password", help="Basic Auth password", required=False)
    parser.add_argument("--concurrency", help="concurrency", default=8, required=False, type=int)
    parser.add_argument("--from", dest='seriesFrom', help="only get series that have been active since this time", required=False)
    parser.add_argument("--depth", type=int, help="maximum depth to traverse. If set, the branches at the depth will be printed", required=False)
    args = parser.parse_args()

    url = args.url
    prefix = args.prefix
    user = args.user
    password = args.password
    concurrency = args.concurrency
    seriesFrom = args.seriesFrom
    depth = args.depth

    queue = Queue()
    for x in range(concurrency):
        worker = Walker(queue, url, user, password, seriesFrom, depth)
        worker.daemon = True
        worker.start()
    queue.put((prefix, 0))
    queue.join()
Note: this code comes from: https://github.com/grafana/cloud-graphite-scripts/blob/master/query/walk_metrics.py
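For example, to get only the sub-tree from the original question, you would run it as python walk_metrics.py --url http://<my.graphite> --prefix my.metric.subtree, which prints just the series under that prefix.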
