I'm having problems executing HTTPS requests: if the request doesn't have any error, I never get the message. This is a command-line tool application and I have a plist entry to allow HTTP requests; I always see the completion block.
typealias escHandler = (URLResponse?, Data?) -> Void

func getRequest(url: URL, _ handler: @escaping escHandler) {
    let session = URLSession.shared
    var request = URLRequest(url: url)
    request.cachePolicy = .reloadIgnoringLocalCacheData
    request.httpMethod = "GET"
    let task = session.dataTask(with: request) { (data, response, error) in
        handler(response, data)
    }
    task.resume()
}
func startOp(action: @escaping () -> Void) -> BlockOperation {
    let exOp = BlockOperation(block: action)
    exOp.completionBlock = {
        print("Finished")
    }
    return exOp
}
for sUrl in textFile.components(separatedBy: "\n") {
    let url = URL(string: sUrl)!
    let queu = startOp {
        getRequest(url: url) { response, data in
            print("REACHED")
        }
    }
    operationQueue.addOperation(queu)
}
operationQueue.waitUntilAllOperationsAreFinished()
One problem is that your operation is merely starting the request, but because the request is performed asynchronously, the operation is immediately completing, not actually waiting for the request to finish. You don't want to complete the operation until the asynchronous request is done.
If you want to do this with operation queues, the trick is that you must subclass Operation and perform the necessary KVO for isExecuting and isFinished. You then change isExecuting when you start the request and isFinished when you finish the request, with the associated KVO for both. This is all outlined in the Concurrency Programming Guide: Defining a Custom Operation Object, notably in the Configuring Operations for Concurrent Execution section. (Note, this guide is a little outdated (it refers to the isConcurrent property, which has been replaced by isAsynchronous; it focuses on Objective-C; etc.), but it introduces you to the issues.)
Anyway, this is an abstract class that I use to encapsulate all of this asynchronous operation silliness:
/// Asynchronous Operation base class
///
/// This class performs all of the necessary KVO notifications of `isFinished` and
/// `isExecuting` for a concurrent `NSOperation` subclass. So, to develop
/// a concurrent NSOperation subclass, you instead subclass this class, which:
///
/// - must override `main()` with the tasks that initiate the asynchronous task;
///
/// - must call `completeOperation()` function when the asynchronous task is done;
///
/// - optionally, periodically check `isCancelled` status, performing any clean-up
///   necessary and then ensuring that `completeOperation()` is called; or
///   override the `cancel` method, calling `super.cancel()` and then cleaning up
///   and ensuring `completeOperation()` is called.
public class AsynchronousOperation: Operation {
    override public var isAsynchronous: Bool { return true }

    private let lock = NSLock()

    private var _executing: Bool = false
    override private(set) public var isExecuting: Bool {
        get {
            return lock.synchronize { _executing }
        }
        set {
            willChangeValue(forKey: "isExecuting")
            lock.synchronize { _executing = newValue }
            didChangeValue(forKey: "isExecuting")
        }
    }

    private var _finished: Bool = false
    override private(set) public var isFinished: Bool {
        get {
            return lock.synchronize { _finished }
        }
        set {
            willChangeValue(forKey: "isFinished")
            lock.synchronize { _finished = newValue }
            didChangeValue(forKey: "isFinished")
        }
    }

    /// Complete the operation
    ///
    /// This will result in the appropriate KVO notifications of isFinished and isExecuting
    public func completeOperation() {
        if isExecuting {
            isExecuting = false
            isFinished = true
        }
    }

    override public func start() {
        if isCancelled {
            isFinished = true
            return
        }

        isExecuting = true

        main()
    }
}
And I use this Apple extension to NSLocking to make sure I synchronize the state changes in the above (theirs was an extension called withCriticalSection on NSLock, but this is a slightly more generalized rendition, working on anything that conforms to NSLocking and handling closures that throw errors):
extension NSLocking {
    /// Perform closure within lock.
    ///
    /// An extension to `NSLocking` to simplify executing critical code.
    ///
    /// - parameter block: The closure to be performed.
    func synchronize<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}
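As a quick aside, the same helper can guard any small critical section on its own. Here is a trivial, hypothetical example (the counter is made up purely for illustration):

let lock = NSLock()
var counter = 0

// Increment and read the counter atomically using the `synchronize` helper above.
let current: Int = lock.synchronize {
    counter += 1
    return counter
}
print(current)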
Then, I can create a NetworkOperation which uses that:
class NetworkOperation: AsynchronousOperation {
    var task: URLSessionTask!

    init(session: URLSession, url: URL, requestCompletionHandler: @escaping (Data?, URLResponse?, Error?) -> ()) {
        super.init()

        task = session.dataTask(with: url) { data, response, error in
            requestCompletionHandler(data, response, error)
            self.completeOperation()
        }
    }

    override func main() {
        task.resume()
    }

    override func cancel() {
        task.cancel()
        super.cancel()
    }
}
Anyway, having done that, I can now create operations for network requests, e.g.:
let queue = OperationQueue()
queue.name = "com.domain.app.network"

let url = URL(string: "http://...")!
let operation = NetworkOperation(session: .shared, url: url) { data, response, error in
    guard let data = data, error == nil else {
        print("\(error)")
        return
    }

    let string = String(data: data, encoding: .utf8)
    print("\(string)")

    // do something with `data` here
}

let operation2 = BlockOperation {
    print("done")
}
operation2.addDependency(operation)

queue.addOperations([operation, operation2], waitUntilFinished: false) // in a command-line app you might use `true` for `waitUntilFinished`, but in standard Cocoa apps you generally would not
Note, in the above example, I added a second operation that just printed something, making it dependent on the first operation, to illustrate that the first operation isn't completed until the network request is done.
Obviously, you would generally never use the waitUntilAllOperationsAreFinished of your original example, nor the waitUntilFinished option of addOperations in my example. But because you're dealing with a command line app that you don't want to exit until these requests are done, this pattern is acceptable. (I only mention this for the sake of future readers who are surprised by the free-wheeling use of waitUntilFinished, which is generally inadvisable.)
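Tying this back to the original command-line crawl, a minimal sketch might look like the following. It assumes the `textFile` string of newline-separated URLs from the question, and the `maxConcurrentOperationCount` value of 4 is just an arbitrary throttle:

let queue = OperationQueue()
queue.name = "com.domain.app.network"
queue.maxConcurrentOperationCount = 4     // arbitrary; limits how many requests run at once

let operations = textFile
    .components(separatedBy: "\n")
    .compactMap { URL(string: $0) }       // skip blank or malformed lines
    .map { url in
        NetworkOperation(session: .shared, url: url) { data, response, error in
            guard let data = data, error == nil else { return }
            print("\(url): \(data.count) bytes")
        }
    }

// Blocking here is acceptable only because this is a command-line tool.
queue.addOperations(operations, waitUntilFinished: true)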
So I'm testing with Blazor and gRPC, and my difficulty at the moment is how to pass the content of a variable that lives in a class, specifically the gRPC GreeterService class, to the Blazor page when new information arrives. Note that my application is both a client and a server: I make an initial communication to the server, and then the server starts sending data (numbers) to the client in unary mode, every time it has new data to send. I have all this working, but now I'm left with that final implementation.
This is my Blazor page
#page "/greeter"
#inject GrpcService1.GreeterService GreeterService1
#using BlazorApp1.Data
<h1>Grpc Connection</h1>
<input type="text" #bind="#myID" />
<button #onclick="#SayHello">SayHello</button>
<p>#Greetmsg</p>
<p></p>
#code {
string Name;
string Greetmsg;
async Task SayHello()
{
this.Greetmsg = await this.GreeterService1.SayHello(this.myID);
}
}
The method that later receives the communication from the server, if the hello is accepted, is something like this:
public override async Task<RequestResponse> GiveNumbers(BalconyFullUpdate request, ServerCallContext context)
{
    RequestResponse resp = new RequestResponse { RequestAccepted = false };
    if (request.Token == publicAuthToken)
    {
        number = request.Number;
        resp.RequestAccepted = true;
    }
    return await Task.FromResult(resp);
}
Every time a new number arrives I want to show it in the UI.
Another way I could do this: within a while loop, the client could make a call to the server requesting a new number, just like the SayHello request, and simply await a server response that will only come when the server has a new number to send. When it comes, the UI is updated. I'm just reluctant to do it this way because I'm afraid that, for some reason, the client request gets forgotten and the client just sits there waiting for a response that will never come. I know that I could implement a timeout on the client side to handle that, and on the server maybe I could pause the response, with a thread pause or something like that, and when the method that generates the new number has a new number, it could unpause the response to the client (no clue on how to do that). This last solution looks much more difficult to me than the first one.
What are your thoughts about it? And solutions?
##################### UPDATE ##########################
Now I'm trying to use a singleton, grab its instance in the Blazor page, and subscribe to an inner event of it.
This is the singleton:
public class ThreadSafeSingletonString
{
    private static ThreadSafeSingletonString _instance;
    private static readonly object _padlock = new object();

    private ThreadSafeSingletonString()
    {
    }

    public static ThreadSafeSingletonString Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_padlock)
                {
                    if (_instance == null)
                    {
                        _instance = new ThreadSafeSingletonString();
                        _instance.number = 0;
                    }
                }
            }
            return _instance;
        }
        set
        {
            _instance.number = value.number;
            _instance.NotifyDataChanged();
        }
    }

    public int number { get; set; }

    public event Action OnChange;
    private void NotifyDataChanged() => OnChange?.Invoke();
}
And in the Blazor page's code section I have:
protected override void OnInitialized()
{
    threadSafeSingleton.OnChange += updateNumber();
}

public System.Action updateNumber()
{
    this.fromrefresh = threadSafeSingleton.number + " que vem.";
    Console.WriteLine("Passou pelo UpdateNumber");
    this.StateHasChanged();
    return StateHasChanged;
}
Unfortunately the updateNumber function never gets executed...
To force a refresh of the UI you can call the StateHasChanged() method on your component:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.components.componentbase.statehaschanged?view=aspnetcore-3.1
Notifies the component that its state has changed. When applicable, this will cause the component to be re-rendered.
Hope this helps
Simple Request
After fully understanding that your problem is just to update the page, not to get asynchronous messages from the server over a bidirectional connection, you just have to change your page like this (please note there is no need to change the files generated by gRPC; I called mine Number.proto, so my service is named NumberService):
async Task SayHello()
{
    //Request via gRPC
    var channel = new Channel(Host + ":" + Port, ChannelCredentials.Insecure);
    var client = new NumberService.NumberServiceClient(channel);
    var request = new Number {
        identification = "ABC"
    };
    var result = (await client.SendNumberAsync(request)).RequestAccepted;
    await channel.ShutdownAsync();

    //Update page
    this.Greetmsg = result.ToString();
    InvokeAsync(StateHasChanged); //Required to refresh page
}
Bi Directional
To make a continuous bidirectional connection you need to change the proto file to use streams, like:
service ChatService {
    rpc chat(stream ChatMessage) returns (stream ChatMessageFromServer);
}
This chat sample is from https://github.com/meteatamel/grpc-samples-dotnet
The main challenge here is to separate the task that waits for the gRPC server from the client. I found that BackgroundService is good for this. So create a service inherited from BackgroundService and place the while loop that waits for the server in the ExecuteAsync method. Also define an Action callback to update the page (alternatively you can use an event):
public class MyChatService : BackgroundService
{
    Random _random = new Random();
    public Action<int> Callback { get; set; }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Replace the next lines with the code that requests and waits for the server...
            using (_call = _chatService.chat())
            {
                // Read messages from the response stream
                while (await _call.ResponseStream.MoveNext(CancellationToken.None))
                {
                    var serverMessage = _call.ResponseStream.Current;
                    var otherClientMessage = serverMessage.Message;
                    var displayMessage = string.Format("{0}:{1}{2}", otherClientMessage.From, otherClientMessage.Message, Environment.NewLine);
                    if (Callback != null) Callback(displayMessage);
                }
                // Format and display the message
            }
        }
    }
}
On the page, initialize the BackgroundService and set the callback:
#page "/greeter"
#using System.Threading
<p>Current Number: #currentNumber</p>
#code {
int currentNumber = 0;
MyChatService myChatService;
protected override async Task OnInitializedAsync()
{
myChatService = new MyChatService();
myChatService.Callback = i =>
{
currentNumber = i;
InvokeAsync(StateHasChanged);
};
await myChatService.StartAsync(new CancellationToken());
}
}
More information on BackgroundService in .net core can be found here: https://gunnarpeipman.com/dotnet-core-worker-service/
Using the Twitch API and Vert.x, I'm looking to continuously send requests to Twitch's API using a WebClient, following Twitch's cursor response to go page by page. However, I'm not sure how to go back and keep doing queries until a condition is met, due to Vert.x's asynchronous nature.
Here's my code so far
public void getEntireStreamList(Handler<AsyncResult<JsonObject>> handler) {
    JsonObject data = new JsonObject();
    getLiveChannels(100, result -> {
        if (result.succeeded()) {
            JsonObject json = result.result();
            String cursor = json.getJsonObject("pagination").getString("cursor");
            data.put("data", json.getJsonArray("data"));
            if (json.getJsonArray("data").size() < 100) { // IF NOT LAST PAGE
                // GO BACK AND DO AGAIN WITH CURSOR IN REQUEST
            }
            handler.handle(Future.succeededFuture(data));
        } else
            handler.handle(Future.failedFuture(result.cause()));
    });
}
Ideally I'd be able to call getLiveChannels with the cursor String from the previous request to continue the search.
You will need to use Future composition.
Here's my code for your problem:
public void getEntireStreamList(Handler<AsyncResult<JsonObject>> handler) {
    JsonArray data = new JsonArray();

    // create initial Future for first function call
    Future<JsonObject> initFuture = Future.future();

    // complete Future when getLiveChannels completes
    // fail on exception
    getLiveChannels(100, initFuture.completer());

    // Create a callback that returns a Future
    // for composition.
    final AtomicReference<Function<JsonObject, Future<JsonObject>>> callback = new AtomicReference<>();

    // Create Function that calls composition with itself.
    // This is similar to recursion.
    Function<JsonObject, Future<JsonObject>> cb = new Function<JsonObject, Future<JsonObject>>() {
        @Override
        public Future<JsonObject> apply(JsonObject json) {
            // new Future to return
            Future<JsonObject> f = Future.future();

            // Do what you wanna do with the data
            String cursor = json.getJsonObject("pagination").getString("cursor");
            data.addAll(json.getJsonArray("data"));

            // IF NOT LAST PAGE
            if (json.getJsonArray("data").size() == 100) {
                // get more live channels with cursor
                getLiveChannels(100, cursor, f.completer());
                // return composed Future
                return f.compose(this);
            }

            // Otherwise return completed Future with results.
            f.complete(new JsonObject().put("data", data));
            return f;
        }
    };

    Future<JsonObject> composite = initFuture.compose(cb);

    // Set handler on composite Future (ALL composed futures together)
    composite.setHandler(result -> handler.handle(result));
}
The code + comments should speak for themselves if you read the Vert.x docs on sequential Future composition.
I come from a C# background and would like to implement awaiting functionality in my Swift app. I've achieved my desired results, but I had to use a semaphore, which I'm not sure is good practice. I have a function with an Alamofire request that returns JSON with a success value, and as I understand it that request function is async with a completion handler. The handler fires once the request is complete. The problem is returning the success value from that operation. Here's a pseudo-code example of what I'm doing:
func AlamoTest() -> Bool {
    var success = false

    //Do some things...
    //...

    //Signal from async code
    let semaphore = DispatchSemaphore(value: 0)

    Alamofire.request("blah blah blah", method: .post, parameters: parameters, encoding: URLEncoding.default).responseJSON { response in
        success = response["success"]
        if success {
            //Do some more things
        }
        semaphore.signal() //Signal async code is done
    }

    //Wait until async code done to get result
    semaphore.wait(timeout: DispatchTime.distantFuture)

    return success
}
Is there a "better" way of achieving my goal? I'm new to Swift and its async constructs.
The best solution I found is what I call "callback chaining". An example of my method looks like this:
func postJSON(json: NSDictionary, function: ServerFunction, completionHandler: ((_ jsonResponse: NSDictionary) -> Void)? = nil) {
    //Create json payload from dictionary object
    guard let payload = serializeJSON(json: json) else {
        print("Error creating json from json parameter")
        return
    }

    //Send request
    Alamofire.request(urlTemplate(function.rawValue), method: .post, parameters: payload, encoding: URLEncoding.default).validate().responseJSON { response in
        //Check response from server
        switch response.result {
        case .success(let data):
            let jsonResponse = data as! NSDictionary
            print("\(jsonResponse)")

            //Execute callback post request handler
            if completionHandler != nil {
                completionHandler!(jsonResponse)
            }
        case .failure(let error):
            print("Shit didn't work!\(error)")
        }
    }
}
The last parameter is a closure that executes once the original async operation is complete. You pass the result to the closure and do what you want with it. In my case I wanted to disable the view while the async operations were running. You can re-enable the view in your closure argument, since the result from the Alamofire async operation is delivered on the main thread. completionHandler defaults to nil if you don't need the result, which stops the chaining.
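For example, a call site inside a view controller might look like the sketch below. The dictionary keys and the `.login` case are placeholders for whatever your own `ServerFunction` enum actually defines:

// Hypothetical usage; the payload keys and the `.login` case are placeholders.
let payload: NSDictionary = ["username": "alice", "password": "secret"]
view.isUserInteractionEnabled = false               // lock the UI while the request runs
postJSON(json: payload, function: .login) { jsonResponse in
    // Alamofire delivers this on the main thread, so UI work is safe here.
    let success = jsonResponse["success"] as? Bool ?? false
    print("success: \(success)")
    self.view.isUserInteractionEnabled = true       // unlock the UI again
}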
You can use this framework for Swift coroutines - https://github.com/belozierov/SwiftCoroutine
func AlamoTest() throws -> Bool {
    try Coroutine.await { callback in
        Alamofire.request("blah blah blah", method: .post, parameters: parameters, encoding: URLEncoding.default).responseJSON { response in
            let success = response["success"]
            callback(success)
        }
    }
}
and then call this method inside a coroutine:
DispatchQueue.main.startCoroutine {
    let result = try AlamoTest()
}
I want to do some web crawling. Currently I am reading a txt file with 12000 URLs, and I want to use concurrency in this process, but the requests don't work.
typealias escHandler = (URLResponse?, Data?) -> Void

func getRequest(url: URL, _ handler: @escaping escHandler) {
    let session = URLSession(
        configuration: .default,
        delegate: nil,
        delegateQueue: nil)

    var request = URLRequest(url: url)
    request.httpMethod = "GET"

    let task = session.dataTask(with: request) { (data, response, error) in
        handler(response, data)
    }
    task.resume()
}
for sUrl in textFile.components(separatedBy: "\n") {
    let url = URL(string: sUrl)!
    getRequest(url: url) { response, data in
        print("RESPONSE REACHED")
    }
}
If you have your URLSession working correctly, all you need to do is create a separate OperationQueue, create an Operation for each async task you want completed, add it to your operation queue, and set your OperationQueue's maxConcurrentOperationCount to control how many of your tasks can run at one time. Pseudo code:
let operationQueue = OperationQueue()
operationQueue.qualityOfService = .utility

let exOperation = BlockOperation(block: {
    //Your URLSessions go here.
})
exOperation.completionBlock = {
    // A completionBlock if needed
}
operationQueue.addOperation(exOperation)
Using an OperationQueue subclass and an Operation subclass will give you additional utilities for dealing with multiple threads.
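Applied to the 12,000-URL crawl from the question, a sketch of that pattern (reusing the question's `getRequest`) might look like this. One caveat, noted in the comments: a `BlockOperation` finishes as soon as `getRequest` returns, so `maxConcurrentOperationCount` only throttles how quickly requests are started; if you need each operation to stay alive until its response arrives, wrap the request in an asynchronous `Operation` subclass instead (one that manages `isExecuting`/`isFinished`), as shown earlier in this document:

let operationQueue = OperationQueue()
operationQueue.qualityOfService = .utility
operationQueue.maxConcurrentOperationCount = 4   // arbitrary limit on simultaneous starts

for sUrl in textFile.components(separatedBy: "\n") {
    guard let url = URL(string: sUrl) else { continue }   // skip blank or invalid lines
    operationQueue.addOperation(BlockOperation {
        getRequest(url: url) { response, data in
            print("RESPONSE REACHED")
        }
        // The block returns as soon as the task is started, so the operation
        // finishes before the response actually arrives.
    })
}

// Waits only for the blocks above, not for the network callbacks themselves.
operationQueue.waitUntilAllOperationsAreFinished()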
I am having an issue while using the RxJava concat operator. I have two observables: the first emits results from a server database and the other one emits results from the local database, and then I concatenate them:
// Uses Retrofit
Observable<MyResult> remoteObservable = mRemoteDataSource.find(tId);
// Uses a Realm in the UI thread
Observable<MyResult> localObservable = mLocalDataSource.find(tId);

Observable.concat(localObservable, remoteObservable)
        .doOnNext(result -> /* Do my stuff */)
        .observeOn(AndroidSchedulers.mainThread())
        .doOnError(throwable -> throwable.printStackTrace())
        .subscribe();
This causes me a problem: since I am not using subscribeOn(), the concatenated observable runs on AndroidSchedulers.mainThread(), so the remote call does not run and it throws a NetworkOnMainThreadException.
If I add subscribeOn(Schedulers.computation()) I get "Realm access from incorrect thread. Realm objects can only be accessed on the thread they were created", since of course the Observable is no longer running on the thread where the Realm instance exists.
I have searched other questions and have not found anything useful. I have checked the example made by Realm: https://github.com/realm/realm-java/blob/master/examples/rxJavaExample/src/main/java/io/realm/examples/rxjava/retrofit/RetrofitExample.java but strangely I see that the Retrofit observable is not subscribed on anything and it works.
Why does it work in the sample, while in my code I cannot do the same? Any suggestions?
I believe you should use subscribeOn() in the right places.
// Uses a Realm in the UI thread
Observable<MyResult> realmObservable = mRealmDataSource.find(tId).subscribeOn(AndroidSchedulers.mainThread());
// Uses Retrofit
Observable<MyResult> retrofitObservable = mRetrofitDataSource.find(tId).subscribeOn(Schedulers.io());

Observable.concat(realmObservable, retrofitObservable)
        .doOnNext(result -> /* Do my stuff */)
        .subscribeOn(AndroidSchedulers.mainThread())
        .observeOn(AndroidSchedulers.mainThread())
        .doOnError(throwable -> throwable.printStackTrace())
        .subscribe();
See if it fixes your issue.
You can concat your local and remote observables like below:
// Uses Retrofit
Observable<MyResult> remoteObservable = mRemoteDataSource.find(tId);
// Uses a Realm in the UI thread
Observable<MyResult> localObservable = mLocalDataSource.find(tId);

Observable.concat(localObservable, remoteObservable).first()
        .map(new Func1<MyResult, MyResult>() {
            @Override
            public MyResult call(MyResult result) {
                if (result == null) {
                    throw new IllegalArgumentException();
                }
                return result;
            }
        });
And subscribe like below:
CompositeSubscription mCompositeSubscription = new CompositeSubscription();

final Subscription subscription = mRepo.find(tId)
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Observer<MyResult>() {
            @Override
            public void onCompleted() {
                // Completed
            }

            @Override
            public void onError(Throwable e) {
                // onError
            }

            @Override
            public void onNext(MyResult result) {
                //onSuccess
            }
        });

mCompositeSubscription.add(subscription);
You can check this repo for RxJava + Retrofit + Realm
https://github.com/savepopulation/wikilight
Good luck!
Instead of using subscribeOn as in mRealmDataSource.find(tId).subscribeOn(AndroidSchedulers.mainThread()), as suggested in https://stackoverflow.com/a/39304891/2425851, you can use Observable.defer. For example:
class RealmDataSource {
    fun find(id: String): Observable<MyResult> {
        // Default pattern for loading data on a background thread
        return Observable.defer {
            val realm = Realm.getInstance()
            val query = realm
                .where(MyResult::class.java)
            val flowable =
                if (realm.isAutoRefresh) {
                    query
                        .findAllAsync()
                        .asFlowable()
                        .filter(RealmResults::isLoaded)
                } else {
                    Flowable.just(query.findAll())
                }
            return@defer flowable
                .toObservable()
        }
    }
}
Then the usage will be without subscribeOn:
// Uses a Realm
Observable<MyResult> realmObservable = mRealmDataSource.find(tId);
// Uses Retrofit
Observable<MyResult> remoteObservable = mRemoteDataSource.find(tId);
For more info see https://realm.io/blog/realm-java-0-87-0/