A gRPC interceptor is applied to the server through a ServerOption; see the docs.
How can I apply an interceptor at the service level? For example, I may only need to apply the authentication interceptor to a protected service. Is this possible in Go?
In addition to the previous answer by eric, you can do something like:
package main

import (
    "context"
    "strings"

    "google.golang.org/grpc"
)

type key int

const (
    sessionIDKey key = iota
)

var (
    // Methods whose (lower-cased) full name contains one of these substrings
    // get a session ID attached to their context.
    needTobeAllocate = [1]string{"allocate"}
)

func run() {
    // server option
    opts := []grpc.ServerOption{}
    opts = append(opts, grpc.UnaryInterceptor(unaryInterceptor))
    _ = grpc.NewServer(opts...) // pass the interceptor to the server
}

func unaryInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    if checkAllocate(info.FullMethod) {
        ctx = context.WithValue(ctx, sessionIDKey, "testsession")
    }
    return handler(ctx, req)
}

func checkAllocate(method string) bool {
    for _, v := range needTobeAllocate {
        if strings.Contains(strings.ToLower(method), v) {
            return true
        }
    }
    return false
}
A gRPC Go interceptor receives a StreamServerInfo (or a UnaryServerInfo) param that contains, among other things, the service and method name of the call. You can use that to filter interceptor behavior based on service/method.
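For example, info.FullMethod has the form "/package.Service/Method", so an authentication interceptor can be gated on a service prefix. A minimal sketch, assuming a hypothetical mypkg.ProtectedService and an authenticate helper of your own:

import (
    "context"
    "strings"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// authenticate is a placeholder for your real credential check, e.g. reading
// a token from the incoming metadata and verifying it.
func authenticate(ctx context.Context) error {
    return nil
}

// authInterceptor enforces authentication only for methods of the hypothetical
// ProtectedService; every other service passes through untouched.
func authInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    // info.FullMethod looks like "/mypkg.ProtectedService/DoThing".
    if strings.HasPrefix(info.FullMethod, "/mypkg.ProtectedService/") {
        if err := authenticate(ctx); err != nil {
            return nil, status.Error(codes.Unauthenticated, "authentication failed")
        }
    }
    return handler(ctx, req)
}

grpc-go also lets you chain several such interceptors (e.g. with grpc.ChainUnaryInterceptor), so each one can filter on its own set of services.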
I need to organize the communication between two Go services but have some problems.
func main() {
    ctxWithCancel, cancel := context.WithCancel(context.Background())

    sig := make(chan os.Signal)
    signal.Notify(sig, os.Interrupt)
    defer signal.Stop(sig)

    wg := &sync.WaitGroup{}
    for i := range conf.Services {
        if conf.Services[i].Type == "http" {
            // call returns HttpService service
            http.NewHttpService(ctxWithCancel, wg, &conf.Services[i])
        } else if conf.Services[i].Type == "tarantool" {
            // call returns Tarantool service
            tarantool.NewTarantoolService(ctxWithCancel, wg, &conf.Services[i])
        } else {
            log.Fatalf("Unknown service name: %v\n", conf.Services[i].Type)
        }
    }

    select {
    case <-sig:
        cancel()
        wg.Wait()
    }
}
HttpService and TarantoolService are services with an infinite loop and request handlers (running as goroutines).
NewHttpService is an HTTP web server, and on a request it should fetch some data from the TarantoolService synchronously.
The question is how HttpService should communicate with TarantoolService. Of course I could pass TarantoolService to http.NewHttpService as a parameter, but I want to keep the services decoupled. Please explain how I could achieve this.
You can use channels (https://gobyexample.com/channels). If your NewTarantoolService has an infinite loop, you could use the following construction:
for {
    select {
    case msg := <-fromHttpChannel:
        // Do something
        toHttpChannel <- msg
    case <-ctx.Done():
        return
    }
}
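Both constructors would then receive the same channel pair from main, so HttpService and TarantoolService share only the channels and never reference each other. A minimal, self-contained sketch of the pattern; the type and function names here are illustrative, not taken from your code:

package main

import (
    "context"
    "fmt"
)

// Hypothetical request/response types carried over the channels.
type tarantoolRequest struct{ key string }
type tarantoolResponse struct{ value string }

// tarantoolLoop stands in for the infinite loop inside TarantoolService.
func tarantoolLoop(ctx context.Context, fromHttp <-chan tarantoolRequest, toHttp chan<- tarantoolResponse) {
    for {
        select {
        case msg := <-fromHttp:
            // Look the value up in Tarantool here; this sketch just fakes it.
            toHttp <- tarantoolResponse{value: "value-for-" + msg.key}
        case <-ctx.Done():
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    fromHttp := make(chan tarantoolRequest)
    toHttp := make(chan tarantoolResponse)
    go tarantoolLoop(ctx, fromHttp, toHttp)

    // An HTTP handler in HttpService would do the same round trip with the
    // channel pair it received from main.
    fromHttp <- tarantoolRequest{key: "some-key"}
    fmt.Println((<-toHttp).value)
}

If several HTTP requests can be in flight at once, you would typically embed a per-request reply channel in the request struct so each caller receives its own answer.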
I have an HTTP server that, when it receives a request, calls an underlying gRPC server.
I have chosen to abstract away the gRPC call behind an interface, to make testing the HTTP server easier.
The problem is that I am constantly getting the errors:
rpc error: code = Canceled desc = grpc: the client connection is closing
or
rpc error: code = Canceled desc = context canceled
As I understand it, both of these are related to the context being passed into the gRPC call, and I want the context to stay alive throughout both the HTTP and gRPC calls.
type SetterGetter interface {
    Getter(key string) (val string)
}

type Service struct {
    sg  SetterGetter
    ctx context.Context
}

func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
    key := r.URL.Query()["key"][0]
    res := s.sg.Getter(key)
    fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}

func main() {
    s := new(Service)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    s.sg = gc.NewClientwrapper(ctx)

    http.HandleFunc("/get", s.getHandler)
    log.Fatal(http.ListenAndServe(port, nil))
}
And my Getter implementation looks like this:
type clientwrapper struct {
    sc  pb.ServicesClient
    ctx context.Context
}

func NewClientwrapper(ctx context.Context) *clientwrapper {
    cw := new(clientwrapper)

    conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        err = fmt.Errorf("Error could not dial address: %v", err)
    }
    defer conn.Close()

    cw.ctx = ctx
    cw.sc = pb.NewServicesClient(conn)

    return cw
}

func (cw *clientwrapper) Getter(key string) (val string) {
    // Make the GRPC request
    res, err := cw.sc.Get(cw.ctx, &pb.GetRequest{Key: key})
    if err != nil {
        return ""
    }

    getVal := res.GetValue()
    return getVal
}
So here I am creating a context in my HTTP server's main function and passing it onwards. I do it like this because it worked when I removed my interface and put everything in the main file.
I have also tried creating the context in the http handler and passing it to the Getter, and I have also tried creating it in the Getter itself.
I think the correct approach is to create the context in the http handler, from the context that comes with the request, and then pass it to the gRPC Getter, like so:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
    // Create it like such
    ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
    defer cancel()

    key := r.URL.Query()["key"][0]

    // And pass it onwards (of course we need to change the function signature for this to work)
    res := s.sg.Getter(ctx, key)

    fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}
So how should I create my context here, to not get these errors?
If your goal is to keep a long-running task running in the background that shouldn't be cancelled when the request is finalized, then don't use the request's context. Use context.Background() instead.
For example:
func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
    // ...
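Either way, passing the context through the call (the signature change you already proposed) keeps each gRPC call's lifetime explicit. A hedged sketch reusing the types from your question; whether you derive the context from context.Background() or from r.Context() depends on whether the work should outlive the HTTP request:

// The interface now takes a per-call context.
type SetterGetter interface {
    Getter(ctx context.Context, key string) (val string)
}

func (s *Service) getHandler(rw http.ResponseWriter, r *http.Request) {
    // Derived from context.Background(), so the gRPC call is not cancelled
    // when the HTTP request finishes early.
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
    defer cancel()

    key := r.URL.Query().Get("key")
    res := s.sg.Getter(ctx, key)
    fmt.Fprintf(rw, "Successfully got value: %s\n", res)
}

func (cw *clientwrapper) Getter(ctx context.Context, key string) (val string) {
    // Use the per-call context instead of one stored on the struct.
    res, err := cw.sc.Get(ctx, &pb.GetRequest{Key: key})
    if err != nil {
        return ""
    }
    return res.GetValue()
}

Separately, note that the defer conn.Close() inside NewClientwrapper closes the gRPC connection as soon as the constructor returns, which by itself produces the "the client connection is closing" error; keep the connection open for the lifetime of the wrapper and close it at shutdown instead.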
So I'm trying to integrate Firebase Performance for HTTP requests and add them manually as they show here (step 9).
I'm using Retrofit 2 and RxJava 2, so I had the idea of writing a custom operator; check the code below:
Retrofit 2 Client
@GET("branch-{environment}/v2/branches")
fun getBranch(@Path("environment") environment: String, @Query("location") location: String, @Query("fulfilment_type") fulfilmentType: String): Single<Response<GetBranchResponse>>
RxJava Call to the Retrofit Client
private val client: BranchClient = clientFactory.create(urlProvider.apiUrl)

override fun getBranch(postCode: String, fulfilmentType: FulfilmentType): Single<GetBranchResponse> {
    return client
        .getBranch(environment, postCode.toUpperCase(), fulfilmentType.toString())
        .lift(RxHttpPerformanceSingleOperator(URL?, METHOD?))
        .map { it.body() }
        .subscribeIO() // custom Kotlin extension
        .observeMain() // custom Kotlin extension
        ...
}
RxJava 2 Custom Operator via lift:
class RxHttpPerformanceSingleOperator<T>(private val url: String, private val method: String) : SingleOperator<Response<T>, Response<T>> {

    private lateinit var metric: HttpMetric

    @Throws(Exception::class)
    override fun apply(observer: SingleObserver<in Response<T>>): SingleObserver<in Response<T>> {
        return object : SingleObserver<Response<T>> {

            override fun onSubscribe(d: Disposable) {
                metric = FirebasePerformance.getInstance().newHttpMetric(url, method.toUpperCase())
                metric.start()
                observer.onSubscribe(d)
            }

            override fun onSuccess(t: Response<T>) {
                observer.onSuccess(t)
                // More info: https://firebase.google.com/docs/perf-mon/get-started-android
                metric.setRequestPayloadSize(t.raw().body().contentLength())
                metric.setHttpResponseCode(t.code())
                metric.stop()
            }

            override fun onError(e: Throwable) {
                observer.onError(e)
                metric.stop()
            }
        }
    }
}
So currently I'm not sure what the proper way is to get the URL and METHOD of the request (marked as URL? and METHOD?) to pass to the operator.
I need them in onSubscribe to start the metric, and at that point I don't have the response yet...
Currently my (admittedly ugly) way to do it is:
Add to the Retrofit Client:
@GET("branch-{environment}/v2/branches")
fun getBranchURL(@Path("environment") environment: String, @Query("location") location: String, @Query("fulfilment_type") fulfilmentType: String): Call<JsonObject>
And obtain the parameters as:
val request = client.getBranchURL(environment, postCode.toUpperCase(), fulfilmentType.toString()).request()
url = request.url().toString()
method = request.method()
This makes me have 2 entries on the Client for each request... which makes no sense.
Some helpful clues along the way:
- How to get the request url in retrofit 2.0 with rxjava?
Add an OkHttp Interceptor to your OkHttpClient.Builder with the FirebasePerformance instance and generate your HttpMetric there:
class FirebasePerformanceInterceptor(val performanceInstance: FirebasePerformance) : Interceptor {

    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()

        // Get request values
        val url = request.url().url()
        val requestPayloadSize = request.body()?.contentLength() ?: 0L
        val httpMethod = request.method()

        // Initialize http trace
        val trace = performanceInstance.newHttpMetric(url, httpMethod)
        trace.setRequestPayloadSize(requestPayloadSize)
        trace.start()

        // Proceed
        val response = chain.proceed(chain.request())

        // Get response values
        val responseCode = response.code()
        val responsePayloadSize = response.body()?.contentLength() ?: 0L

        // Add response values to trace and close it
        trace.setHttpResponseCode(responseCode)
        trace.setResponsePayloadSize(responsePayloadSize)
        trace.stop()

        return response
    }
}
You can directly copy and paste this code and it will work.
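For reference, registration would look roughly like this; the builder details below (base URL, converter and call-adapter factories) are assumptions about your setup, not part of the interceptor above:

// Register the interceptor on the OkHttpClient that backs Retrofit.
val okHttpClient = OkHttpClient.Builder()
    .addInterceptor(FirebasePerformanceInterceptor(FirebasePerformance.getInstance()))
    .build()

val retrofit = Retrofit.Builder()
    .baseUrl(baseUrl) // your API base URL
    .client(okHttpClient)
    .addConverterFactory(GsonConverterFactory.create())
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .build()

Every request made through this Retrofit instance is then traced automatically, so the custom lift operator and the duplicate getBranchURL endpoint are no longer needed.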
Enjoy!
What I'm going to suggest doesn't necessarily fit your approach; it's just a different way of thinking about what you're trying to accomplish.
I would suggest 2 different approaches:
Create your own observer (a class that extends Observer) that receives a Retrofit Call object, and do your Firebase logic in the subscribeActual method.
Use AspectJ to define an annotation that is processed when the Retrofit call is about to be executed, and do the Firebase logic inside the aspect. (I'm not sure how AspectJ and Kotlin work together, though.)
I have been using GoRequest as a package in my Go application.
I use it because it makes all the API calls I need to make a lot cleaner. One thing it lacks, which I can do with a regular http.Client, is outbound rate limiting on the Transport.
For example, in one of my applications I use this:
type rateLimitTransport struct {
    limiter *rate.Limiter
    xport   http.RoundTripper
}

var _ http.RoundTripper = &rateLimitTransport{}

func newRateLimitTransport(r float64, xport http.RoundTripper) http.RoundTripper {
    return &rateLimitTransport{
        limiter: rate.NewLimiter(rate.Limit(r), 1),
        xport:   xport,
    }
}

func (t *rateLimitTransport) RoundTrip(r *http.Request) (*http.Response, error) {
    t.limiter.Wait(r.Context())
    return t.xport.RoundTrip(r)
}

var myClient = http.Client{
    // Use a rate-limiting transport which falls back to the default
    Transport: newRateLimitTransport(1, http.DefaultTransport),
}
This allows me to create a client that is rate limited to 1 request per second, i.e. 60 requests per minute.
I have forked the GoRequest package and am trying to add a new method to it called SetRateLimit(), which should take the rate per second as an argument and use it to install a rate-limited transport.
My attempt so far looks like this:
import (
    // ... current imports
    "golang.org/x/time/rate"
)

// ... rest of the package

type rateLimitTransport struct {
    limiter *rate.Limiter
    xport   http.RoundTripper
}

var _ http.RoundTripper = &rateLimitTransport{}

func newRateLimitTransport(r float64, xport http.RoundTripper) http.RoundTripper {
    return &rateLimitTransport{
        limiter: rate.NewLimiter(rate.Limit(r), 1),
        xport:   xport,
    }
}

func (t *rateLimitTransport) RoundTrip(r *http.Request) (*http.Response, error) {
    t.limiter.Wait(r.Context())
    return t.xport.RoundTrip(r)
}

func (s *SuperAgent) SetRateLimit(limit float64) *SuperAgent {
    s.Transport = newRateLimitTransport(limit, http.DefaultTransport)
    return s
}
However I get an error when trying to build this:
cannot use newRateLimitTransport(limit, http.DefaultTransport) (type http.RoundTripper) as type *http.Transport in assignment: need type assertion
I've been looking at this for hours, and I don't quite understand how this works for a regular http.Client but doesn't work for this package.
Can someone please help me resolve the issue above and add rate limiting to this package?
Update - SuperAgent struct
// A SuperAgent is a object storing all request data for client.
type SuperAgent struct {
    Url               string
    Method            string
    Header            http.Header
    TargetType        string
    ForceType         string
    Data              map[string]interface{}
    SliceData         []interface{}
    FormData          url.Values
    QueryData         url.Values
    FileData          []File
    BounceToRawString bool
    RawString         string
    Client            *http.Client
    Transport         *http.Transport
    Cookies           []*http.Cookie
    Errors            []error
    BasicAuth         struct{ Username, Password string }
    Debug             bool
    CurlCommand       bool
    logger            Logger
    Retryable         struct {
        RetryableStatus []int
        RetryerTime     time.Duration
        RetryerCount    int
        Attempt         int
        Enable          bool
    }
    // If true prevents clearing Superagent data and makes it possible to reuse it for the next requests
    DoNotClearSuperAgent bool
}
The error you are seeing states the issue:
cannot use newRateLimitTransport(limit, http.DefaultTransport) (type http.RoundTripper) as type *http.Transport in assignment: need type assertion
The SuperAgent.Transport field has type *http.Transport, but you are trying to assign an http.RoundTripper to that field.
To fix this you could change the SuperAgent struct:
type SuperAgent struct {
    // ...
    Transport http.RoundTripper
    // ...
}
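With that change, the SetRateLimit method from the question compiles as written, and because http.Client.Transport is itself the http.RoundTripper interface, handing the field to the underlying client needs no further conversion. One caveat, which is an assumption about the rest of the fork rather than something from the question: any code elsewhere in the package that relies on *http.Transport-specific fields (proxy or TLS settings, for example) would then need a type assertion. A minimal sketch with a hypothetical helper:

// concreteTransport recovers the concrete *http.Transport for the places in
// the fork that still need transport-specific fields; it reports false when a
// custom RoundTripper such as rateLimitTransport is installed.
func (s *SuperAgent) concreteTransport() (*http.Transport, bool) {
    t, ok := s.Transport.(*http.Transport)
    return t, ok
}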
I'm having problems executing HTTPS requests: if the request doesn't have any error, I never get the message. This is a command-line tool application and I have a plist entry to allow HTTP requests; I always see the completion block fire.
typealias escHandler = (URLResponse?, Data?) -> Void

func getRequest(url: URL, _ handler: @escaping escHandler) {
    let session = URLSession.shared

    var request = URLRequest(url: url)
    request.cachePolicy = .reloadIgnoringLocalCacheData
    request.httpMethod = "GET"

    let task = session.dataTask(with: url) { (data, response, error) in
        handler(response, data)
    }

    task.resume()
}
func startOp(action: @escaping () -> Void) -> BlockOperation {
    let exOp = BlockOperation(block: action)
    exOp.completionBlock = {
        print("Finished")
    }
    return exOp
}
for sUrl in textFile.components(separatedBy: "\n") {
    let url = URL(string: sUrl)!
    let queu = startOp {
        getRequest(url: url) { response, data in
            print("REACHED")
        }
    }
    operationQueue.addOperation(queu)
}
operationQueue.waitUntilAllOperationsAreFinished()
One problem is that your operation merely starts the request; because the request is performed asynchronously, the operation completes immediately rather than waiting for the request to finish. You don't want to complete the operation until the asynchronous request is done.
If you want to do this with operation queues, the trick is that you must subclass Operation and perform the necessary KVO for isExecuting and isFinished. You then change isExecuting when you start the request and isFinished when you finish the request, with the associated KVO for both. This is all outlined in the Concurrency Programming Guide: Defining a Custom Operation Object, notably in the Configuring Operations for Concurrent Execution section. (Note, this guide is a little outdated: it refers to the isConcurrent property, which has since been replaced by isAsynchronous, and it focuses on Objective-C, but it introduces you to the issues.)
Anyway, this is an abstract class that I use to encapsulate all of this asynchronous-operation silliness:
/// Asynchronous Operation base class
///
/// This class performs all of the necessary KVN of `isFinished` and
/// `isExecuting` for a concurrent `NSOperation` subclass. So, to develop
/// a concurrent NSOperation subclass, you instead subclass this class which:
///
/// - must override `main()` with the tasks that initiate the asynchronous task;
///
/// - must call `completeOperation()` function when the asynchronous task is done;
///
/// - optionally, periodically check `self.cancelled` status, performing any clean-up
///   necessary and then ensuring that `completeOperation()` is called; or
///   override `cancel` method, calling `super.cancel()` and then cleaning-up
///   and ensuring `completeOperation()` is called.

public class AsynchronousOperation : Operation {

    override public var isAsynchronous: Bool { return true }

    private let lock = NSLock()

    private var _executing: Bool = false
    override private(set) public var isExecuting: Bool {
        get {
            return lock.synchronize { _executing }
        }
        set {
            willChangeValue(forKey: "isExecuting")
            lock.synchronize { _executing = newValue }
            didChangeValue(forKey: "isExecuting")
        }
    }

    private var _finished: Bool = false
    override private(set) public var isFinished: Bool {
        get {
            return lock.synchronize { _finished }
        }
        set {
            willChangeValue(forKey: "isFinished")
            lock.synchronize { _finished = newValue }
            didChangeValue(forKey: "isFinished")
        }
    }

    /// Complete the operation
    ///
    /// This will result in the appropriate KVN of isFinished and isExecuting

    public func completeOperation() {
        if isExecuting {
            isExecuting = false
            isFinished = true
        }
    }

    override public func start() {
        if isCancelled {
            isFinished = true
            return
        }

        isExecuting = true

        main()
    }
}
And I use this extension to NSLocking (adapted from an Apple example) to make sure I synchronize the state changes in the class above; theirs was an extension called withCriticalSection on NSLock, but this is a slightly more generalized rendition, working on anything that conforms to NSLocking and handling closures that throw errors:
extension NSLocking {

    /// Perform closure within lock.
    ///
    /// An extension to `NSLocking` to simplify executing critical code.
    ///
    /// - parameter block: The closure to be performed.

    func synchronize<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}
Then, I can create a NetworkOperation which uses that:
class NetworkOperation: AsynchronousOperation {
    var task: URLSessionTask!

    init(session: URLSession, url: URL, requestCompletionHandler: @escaping (Data?, URLResponse?, Error?) -> ()) {
        super.init()

        task = session.dataTask(with: url) { data, response, error in
            requestCompletionHandler(data, response, error)
            self.completeOperation()
        }
    }

    override func main() {
        task.resume()
    }

    override func cancel() {
        task.cancel()
        super.cancel()
    }
}
Anyway, having done that, I can now create operations for network requests, e.g.:
let queue = OperationQueue()
queue.name = "com.domain.app.network"

let url = URL(string: "http://...")!
let operation = NetworkOperation(session: .shared, url: url) { data, response, error in
    guard let data = data, error == nil else {
        print("\(error)")
        return
    }

    let string = String(data: data, encoding: .utf8)
    print("\(string)")

    // do something with `data` here
}

let operation2 = BlockOperation {
    print("done")
}
operation2.addDependency(operation)

queue.addOperations([operation, operation2], waitUntilFinished: false) // in a command line app you might use `true` for `waitUntilFinished`, but in standard Cocoa apps you generally would not
Note, in the above example, I added a second operation that just printed something, making it dependent on the first operation, to illustrate that the first operation isn't completed until the network request is done.
Obviously, you would generally never use the waitUntilAllOperationsAreFinished of your original example, nor the waitUntilFinished option of addOperations in my example. But because you're dealing with a command line app that you don't want to exit until these requests are done, this pattern is acceptable. (I only mention this for the sake of future readers who are surprised by the free-wheeling use of waitUntilFinished, which is generally inadvisable.)