Telegram "API development tools" limits - telegram

I'm trying to use my application (built with TLSharp), but suddenly a call to TelegramClient.SendCodeRequestAsync throws this exception:
"Flood prevention. Telegram now requires your program to do requests
again only after 84894 seconds have passed (TimeToWait property). If
you think the culprit of this problem may lie in TLSharp's
implementation, open a Github issue "
After waiting 84894 seconds, it shows the same message again.
(I waited and retried several times, but the message doesn't change. :( )
Someone told me these are Telegram limits. Is that right?
Is there a better way to send a message/file to a Telegram account?

It might be a late answer, but it can be used as a reference. The first problem is that the Telegram API doesn't let a single phone number send a code request more than 5 times a day. The second problem is the shared session file that TelegramClient uses by default, so you should create a custom session store that keeps each phone number's session in a separate .dat file.
using System;
using System.IO;
using TLSharp.Core;

public class CustomSessionStore : ISessionStore
{
    public void Save(Session session)
    {
        var dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sessions");
        if (!Directory.Exists(dir))
        {
            Directory.CreateDirectory(dir);
        }

        // One file per phone number, e.g. Sessions/<SessionUserId>.dat
        var file = Path.Combine(dir, "{0}.dat");
        using (FileStream fileStream = new FileStream(string.Format(file, session.SessionUserId), FileMode.OpenOrCreate))
        {
            byte[] bytes = session.ToBytes();
            fileStream.Write(bytes, 0, bytes.Length);
        }
    }

    public Session Load(string sessionUserId)
    {
        var dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sessions");
        if (!Directory.Exists(dir))
        {
            Directory.CreateDirectory(dir);
        }

        var file = Path.Combine(dir, "{0}.dat");
        string path = string.Format(file, sessionUserId);
        if (!File.Exists(path))
            return null;

        var buffer = File.ReadAllBytes(path);
        return Session.FromBytes(buffer, this, sessionUserId);
    }
}
Then create your TelegramClient like this:
var client = new TelegramClient(apiId, apiHash, new CustomSessionStore(), phoneNumber);
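A minimal usage sketch (assuming TLSharp's usual auth flow; codeFromSms stands in for whatever code you collect from the user):

// Each phone number now gets its own Sessions/<phone>.dat file, so auth
// state is no longer shared between numbers.
var client = new TelegramClient(apiId, apiHash, new CustomSessionStore(), phoneNumber);
await client.ConnectAsync();
var hash = await client.SendCodeRequestAsync(phoneNumber);
var user = await client.MakeAuthAsync(phoneNumber, hash, codeFromSms);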

I guess you are closing and restarting your application many times, or calling this method repeatedly. After about 10 attempts, the Telegram API makes you wait roughly 24 hours to prevent flooding.
It's a Telegram limit. My advice: wait 2-3 minutes between calls to SendCodeRequestAsync().
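If you'd rather react to the limit than pace requests blindly, here is a sketch (assuming TLSharp's FloodException, which carries the TimeToWait mentioned in the error message above):

string hash;
try
{
    hash = await client.SendCodeRequestAsync(phoneNumber);
}
catch (FloodException ex)
{
    await Task.Delay(ex.TimeToWait); // wait the server-mandated interval
    hash = await client.SendCodeRequestAsync(phoneNumber); // then retry once
}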

Stream on the fly zipped files to client via rest endpoint

I am trying to stream zipped files on the fly, but memory consumption is high. For example, zipping files totaling 2.8 GB takes nearly 5 GB of process memory.
[Route("zip")]
public class ZipController : ControllerBase
{
private readonly HttpClient _httpClient;
public ZipController()
{
_httpClient = new HttpClient();
}
[HttpPost]
public async Task Zip([FromBody] JsonToZipInput input)
{
Response.ContentType = "application/octet-stream";
Response.Headers.Add($"Content-Disposition", $"attachment; filename=\"{input.FileName}\"");
using var zipArchive =
new ZipArchive(Response.BodyWriter.AsStream(), ZipArchiveMode.Create);
foreach (var (key, value) in input.FilePathsToUrls)
{
var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.Optimal);
await using var zipStream = zipEntry.Open();
await using var stream = await _httpClient.GetStreamAsync(value);
await stream.CopyToAsync(zipStream);
}
}
}
I believe you should be able to call Response.StartAsync:
[HttpPost]
public async Task Zip([FromBody] JsonToZipInput input)
{
    Response.ContentType = "application/octet-stream";
    Response.Headers.Add("Content-Disposition", $"attachment; filename=\"{input.FileName}\"");
    await Response.StartAsync();

    using var zipArchive = new ZipArchive(Response.BodyWriter.AsStream(), ZipArchiveMode.Create);
    foreach (var (key, value) in input.FilePathsToUrls)
    {
        var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.Optimal);
        await using var zipStream = zipEntry.Open();
        await using var stream = await _httpClient.GetStreamAsync(value);
        await stream.CopyToAsync(zipStream);
    }
}
StartAsync starts sending the response. Note that neither the response headers nor the status code can be modified once StartAsync has been called.
In particular, this means your exception handling will be different. Previously, an exception (e.g., from a bad URL in the request) would produce an error status code (i.e., 500). With a streaming response, an exception thrown after StartAsync cannot change the status code; it has already been sent. Instead, it will appear to the client as though the connection was terminated without a clean close. Complicating this further, it's not uncommon for web servers to behave this way in the successful case too, so clients may not complain; they would just end up with truncated (invalid) zip files. (With streaming zips, the zip's "file table" is sent last instead of first.)
So, this should work, but I also recommend:
Ensure your exception logging works for exceptions after StartAsync. There is no way to return error details to the client, so you must rely on logging.
If you control the client, test out this new error situation, and see if you can detect it. If it's not detectable using that client, then ensure your code validates the zip.
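For that validation, a minimal client-side sketch (System.IO / System.IO.Compression; downloadedPath is assumed); a truncated streamed zip loses its central directory at the end, which surfaces as InvalidDataException when the archive is opened:

try
{
    using var archive = new ZipArchive(File.OpenRead(downloadedPath), ZipArchiveMode.Read);
    foreach (var entry in archive.Entries)
    {
        using var s = entry.Open();
        s.CopyTo(Stream.Null); // surfaces decompression errors from truncated entry data
    }
}
catch (InvalidDataException)
{
    // Truncated or corrupt download; retry or report the error.
}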
Nothing about the zip file format should require a large amount of memory for this use case. It's essentially all the files in order, with a table at the end describing the zip structure and file offsets. This makes it possible to stream very efficiently without using much memory at all.
You may not need to write this yourself; ZipStreamer is a microservice you host that does exactly this (disclosure: I'm the author). It's designed to solve exactly the problems you're hitting, by streaming bytes out as soon as they come in, with a fixed buffer size to prevent blowing up memory. It can stream hundreds of zip files in parallel using only a few MB of memory.
If you need this to be part of your application, here are some suggestions.
Disabling compression will save CPU and a bit of memory. Depending on your files, compression might not be a major benefit (JPEGs actually get bigger after zip compression). If you're zipping just to combine many files into one, this will really help. But it doesn't explain using GBs of memory.
Ensure you're not holding the stream content any longer than you need to; it looks like you are. Start streaming back ASAP, as @Stephen suggested with StartAsync.
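For the compression suggestion, it's a one-line change to the controller above:

// Store entries rather than deflating them; useful when the content
// (e.g., JPEGs) is already compressed.
var zipEntry = zipArchive.CreateEntry(key, CompressionLevel.NoCompression);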

Confluent batch consumer: consumer not working if a timeout is specified

I am trying to consume a maximum of 1000 messages from Kafka at a time. (I am doing this because I need to batch insert into MSSQL.) I was under the impression that Kafka keeps an internal queue that fetches messages from the brokers, and that when I use the consumer.Consume() method it just checks whether there are any messages in the internal queue and returns if it finds something; otherwise it blocks until the internal queue is updated or until the timeout.
I tried to use the solution suggested here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1164#issuecomment-610308425
But when I specify TimeSpan.Zero (or any other timespan up to 1000 ms), the consumer never consumes any messages. If I remove the timeout it does consume messages, but then I am unable to exit the loop if there are no more messages left to be read.
I also saw another question on Stack Overflow which suggested reading the offset of the last message sent to Kafka, then reading messages until that offset is reached and breaking from the loop. But currently I only have one consumer and 6 partitions for the topic. I haven't tried it yet, but I think managing offsets for each of the partitions might make the code messy.
Can someone please tell me what to do?
static List<RealTime> getBatch()
{
    var config = new ConsumerConfig
    {
        BootstrapServers = ConfigurationManager.AppSettings["BootstrapServers"],
        GroupId = ConfigurationManager.AppSettings["ConsumerGroupID"],
        AutoOffsetReset = AutoOffsetReset.Earliest,
    };

    List<RealTime> results = new List<RealTime>();
    List<string> malformedJson = new List<string>();

    using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
    {
        consumer.Subscribe("RealTimeTopic");
        int count = 0;
        while (count < batchSize)
        {
            var consumerResult = consumer.Consume(1000); // timeout in ms
            if (consumerResult?.Message is null)
            {
                break;
            }
            Console.WriteLine("read");
            try
            {
                RealTime item = JsonSerializer.Deserialize<RealTime>(consumerResult.Message.Value);
                results.Add(item);
                count += 1;
            }
            catch (Exception e)
            {
                Console.WriteLine("malformed");
                malformedJson.Add(consumerResult.Message.Value);
            }
        }
        consumer.Close();
    }

    Console.WriteLine(malformedJson.Count);
    return results;
}
I found a workaround.
For some reason the consumer first needs to be called without a timeout; that call blocks until it gets at least one message. After that, calling Consume with a zero timeout fetches the rest of the messages one by one from the internal queue. This seems to work out for the best.
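Adapting the loop from the question, the workaround looks roughly like this (a sketch against the Confluent.Kafka API; the JSON handling is elided):

// Block until at least one message arrives, then drain the client's
// internal queue with a zero timeout; Consume returns null when it is empty.
var first = consumer.Consume();
var batch = new List<ConsumeResult<Ignore, string>> { first };
while (batch.Count < batchSize)
{
    var next = consumer.Consume(TimeSpan.Zero);
    if (next?.Message is null)
        break;
    batch.Add(next);
}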
I had a similar problem; updating the Confluent.Kafka and librdkafka libraries from version 1.8.2 to 2.0.2 helped.

How to correlate two AppInsights resources that communicate through NServiceBus?

Currently, I have dozens of .NET services hosted on various machines that show up as Resources on my AppInsights Application Map, which also shows their dependencies with respect to each other, based on the HTTP requests they make.
However, the relationships between services that communicate through NServiceBus (over RabbitMQ) are not shown. I am able to show the messages that are either sent or handled by a service via calls to TelemetryClient.TrackXXX(), but not to connect Resources on the map using this information.
I have even gone so far as to attach the parent operation ID from the NSB message sender to the message itself, and assign it to the telemetry object in the receiver, but there is still no line drawn between the services in the Application Map.
To reiterate, this is what I'm getting in the Application Map:
(NSB Message Sender) --> (Message sent/handled)
And this is what I want:
(NSB Sender) --> (Receiver)
The services in question are .NET Core 3.1.
I cannot provide the code, as this is for my work, but any help would be greatly appreciated. I've searched everywhere, and even sources that seemed like they would help, didn't.
Alright, I finally got it. My approach to correlating AppInsights resources via their NSB communication is to mimic HTTP telemetry correlation.
Below is an extension method I wrote for AppInsights' TelemetryClient. I made a subclass RbmqMessage : NServiceBus.IMessage (my applications use RabbitMQ) and gave it the following properties for the sake of correlation (all set in the service that sends the message):
parentId: equal to DependencyTelemetry.Id
opId: the same value in the sender's DependencyTelemetry and the receiver's RequestTelemetry; equal to telemetry.Context.Operation.Id
startTime: DateTime.Now was good enough for my purposes
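The message base class itself isn't shown here, but from the listed properties it is roughly (a sketch; lower-case property names as described above, startTime typed to match telemetry.Timestamp):

public class RbmqMessage : NServiceBus.IMessage
{
    public string parentId { get; set; }           // sender's DependencyTelemetry.Id
    public string opId { get; set; }               // operation id shared by both ends
    public DateTimeOffset startTime { get; set; }  // set when the message is sent
}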
The code in the service that sends the NSB message:
public static RbmqMessage TrackRbmq(this TelemetryClient client, RbmqMessage message)
{
    var msg = message;

    // I ran into some issues with Reflection, hence the string parsing.
    var classNameIdx = message.ToString().LastIndexOf('.') + 1;
    var messageClassName = message.ToString().Substring(classNameIdx);

    var telemetry = new DependencyTelemetry
    {
        Type = "RabbitMQ",
        Data = "SEND " + messageClassName,
        Name = "SEND " + messageClassName,
        Timestamp = DateTime.Now,
        Target = "RECEIVE " + messageClassName // matches the name in the receiving service
    };
    client.TrackDependency(telemetry);

    msg.parentId = telemetry.Id;
    msg.opId = telemetry.Context.Operation.Id; // this won't have a value until TrackDependency is called
    msg.startTime = telemetry.Timestamp;
    return msg;
}
The code where you send the NSB message:
var msg = new MyMessage(); //make your existing messages inherit RbmqMessage
var correlatedMessage = _telemetryClient.TrackRbmq(msg);
MessageSession.Publish(correlatedMessage); //or however the NSB message goes out in your application
The extension method in the NServiceBus message-receiving service:
public static void TrackRbmq(this TelemetryClient client, RbmqMessage message)
{
    var classNameIdx = message.ToString().LastIndexOf('.') + 1;
    var telemetry = new RequestTelemetry
    {
        Timestamp = DateTime.Now,
        Name = "RECEIVE " + message.ToString().Substring(classNameIdx)
    };
    telemetry.Context.Operation.ParentId = message.parentId;
    telemetry.Context.Operation.Id = message.opId;
    telemetry.Duration = telemetry.Timestamp - message.startTime; // elapsed time between send and receive
    client.TrackRequest(telemetry);
}
And finally, just track and send the message:
var msg = new MyMessage();
_telemetryClient.TrackRbmq(msg);
MessagePipeline.Send(msg); // or however it's sent in your app
I hope this saves someone the trouble I went through.

Asynchronous Hive query execution: OperationHandle gets cleaned up server-side as soon as the query-initiating client disconnects

Is it possible to execute a query asynchronously in hive server?
For example, how can I (or is it possible to) do something like this from the client:
QueryHandle handle = executeAsyncQuery(hiveQuery);
Status status = handle.checkStatus();
if (status.isCompleted()) {
    QueryResult result = handle.fetchResult();
}
I also had a look at "How do I make an async call to Hive in Java?", but it did not help. The answers were mostly around the Thrift clients taking a callback argument.
Any help would be appreciated. Thanks!
[EDIT 1]
I went through HiveConnection.java in hive-jdbc. hive-jdbc uses the async Thrift APIs by default: it submits a query and then polls for result sets (see HiveStatement.java). I am now able to write a piece of code that is purely non-blocking, but the problem is that as soon as the client disconnects, the footprint of the query is lost.
Client 1
final TCLIService.Client client = new TCLIService.Client(createBinaryTransport(host, port, loginTimeout, sessConf, false)); // from HiveConnection.java
TSessionHandle sessionHandle = openSession(client); // from HiveConnection.java

TExecuteStatementReq execReq = new TExecuteStatementReq(sessionHandle, sql);
execReq.setRunAsync(true);
execReq.setConfOverlay(sessConf);
final TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
final TGetOperationStatusReq handle = new TGetOperationStatusReq(execResp.getOperationHandle());
writeHandleToFile("~/handle", handle);
Client 2
final TGetOperationStatusReq handle = readHandleFromFile("~/handle");
final TCLIService.Client client = new TCLIService.Client(createBinaryTransport(host, port, loginTimeout, sessConf, false));
while (true) {
    System.out.println(client.GetOperationStatus(handle).getOperationState());
    Thread.sleep(1000);
}
Client 2 keeps printing FINISHED_STATE as long as Client 1 is alive. But if the Client 1 process completes or gets killed, Client 2 starts printing null, which means HiveServer2 cleans up the query's resources as soon as the initiating client disconnects.
Is it possible to configure HiveServer2 to base this cleanup on a timeout instead?
Thanks!
Did some research and figured out that this happens only with the binary transport (TCP):
@Override
public void deleteContext(ServerContext serverContext,
                          TProtocol input, TProtocol output) {
    Metrics metrics = MetricsFactory.getInstance();
    if (metrics != null) {
        try {
            metrics.decrementCounter(MetricsConstant.OPEN_CONNECTIONS);
        } catch (Exception e) {
            LOG.warn("Error Reporting JDO operation to Metrics system", e);
        }
    }
    ThriftCLIServerContext context = (ThriftCLIServerContext) serverContext;
    SessionHandle sessionHandle = context.getSessionHandle();
    if (sessionHandle != null) {
        LOG.info("Session disconnected without closing properly, close it now");
        try {
            cliService.closeSession(sessionHandle);
        } catch (HiveSQLException e) {
            LOG.warn("Failed to close session: " + e, e);
        }
    }
}
The above stub (from ThriftBinaryCLIService) gets executed through this piece of code in TThreadPoolServer, which ThriftBinaryCLIService uses:

eventHandler.deleteContext(connectionContext, inputProtocol, outputProtocol);
Apparently the HTTP transport (ThriftHttpCLIService) has a different strategy for cleaning up operation handles (not greedy like TCP).
I will check with the Hive community to understand this a bit more and see if there is already an issue addressing it.

asp.net membership, notify administrators when account about to expire

I have a requirement that a certain email distribution list should be notified every so often (still to be determined) about user accounts that are nearing expiration.
I'm wondering about the best way to achieve this. I know it's generally a bad idea to spawn another thread within ASP.NET to handle this type of thing, so I'm thinking maybe a simple service is the way to go, but for something so small that seems like it might be overkill.
Ideally I'd like something that doesn't require much babysitting (e.g. checking that the service is running).
I have also suggested having a page in the site with this type of information, but it is likely that a few days could pass before anyone checks it. We also cannot let users extend their own expiration date.
Are there any other viable options?
The most suitable method would be:
Create an application that selects all users whose account expiry date is near (e.g. within 10 days from today), per your requirement.
Schedule that application to run daily (create an exe with a log file that records any errors raised and the total number of emails sent per run).
The application fetches all records matching the criteria and emails each user using a basic HTML template. Once an email is sent, update a column (notificationFlag) in your database to 1 to record that a notice went out in the last 10 days; by default the flag is 0.
You can schedule the exe to run at the end of the day, e.g. 12:10 am every day (just in case your database server and web server don't agree on the time).
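A minimal sketch of such a job (table, column, and server names, including notificationFlag, are illustrative; the flag update and logging are left as comments):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;

class ExpiryNotifier
{
    static void Main()
    {
        // Users expiring within 10 days that have not yet been notified.
        var expiring = new List<(int Id, string Email)>();
        using (var conn = new SqlConnection("<connection string>"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                @"SELECT UserId, Email FROM Users
                  WHERE ExpiryDate <= DATEADD(day, 10, GETDATE())
                    AND notificationFlag = 0", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    expiring.Add(((int)reader["UserId"], (string)reader["Email"]));
            }
        }

        using (var smtp = new SmtpClient("<smtp host>"))
        {
            foreach (var (id, email) in expiring)
            {
                smtp.Send("noreply@example.com", email,
                          "Your account is about to expire", "<HTML template body>");
                // On success: set notificationFlag = 1 for this user and log the send.
            }
        }
    }
}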
This is something I've done which is similar to Prescott's comment on your answer.
I have a website with an administrative page that reports on a bunch of expiration dates.
This page also accepts a query string parameter, so anytime an administrative user of the site loads it with SEND_EMAILS=true, a batch of emails goes out to all the users that are expiring.
Then I just added a Windows scheduled task to run daily and load the page with the SEND_EMAILS=true parameter.
This was the simple code I used to issue the web request from the console in the scheduled task:
using System;
using System.IO;
using System.Net;
using System.Text;

namespace CmdLoadWebsite
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://default/site/";
            if (args.Length > 0)
            {
                url = args[0];
            }
            Console.WriteLine(GetWebResult(url));
        }

        public static string GetWebResult(string url)
        {
            byte[] buff = new byte[8192];
            StringBuilder sb = new StringBuilder();

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            Stream webStream = response.GetResponseStream();

            int count;
            string webString;
            do
            {
                count = webStream.Read(buff, 0, buff.Length);
                if (count != 0)
                {
                    webString = Encoding.ASCII.GetString(buff, 0, count);
                    sb.Append(webString);
                }
            }
            while (count > 0);

            return sb.ToString();
        }
    }
}
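The scheduled task then just runs the console app with the admin page URL, along the lines of (URL and path are illustrative):

CmdLoadWebsite.exe "http://yoursite/admin/expirations?SEND_EMAILS=true"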
