How to (asynchronously) consume a streaming endpoint generated with servant's StreamGenerators?

The servant documentation describes how to create streaming endpoints:
type StreamAPI = "userStream" :> StreamGet NewlineFraming JSON (StreamGenerator User)
streamAPI :: Proxy StreamAPI
streamAPI = Proxy
streamUsers :: StreamGenerator User
Now the question is: how can a client (written in JavaScript, for instance) consume the endpoint in an asynchronous fashion?

Note that in newer servant (0.15, IIRC) streaming was refactored. However, the question of how to consume streaming endpoints in JavaScript is independent of the backend implementation.
For example, if you use the Fetch API, you can use its reader interface, which is well explained on MDN. Summarizing:
// Fetch the original image
fetch('./tortoise.png')
  // Retrieve its body as a ReadableStream
  .then(response => {
    const reader = response.body.getReader();
    // ... read from the reader here
  });
Then you can call reader.read() to pull the response in chunks.
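For instance, a complete read loop could look like the sketch below, assuming the servant endpoint above is served at /userStream and emits one JSON-encoded User per line (the URL and the logging are illustrative):
async function consumeUserStream() {
  const response = await fetch('/userStream');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    // NewlineFraming separates values with '\n', so process complete lines only.
    const lines = buffered.split('\n');
    buffered = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim() !== '') {
        const user = JSON.parse(line); // one User object per line
        console.log('received user:', user);
      }
    }
  }
}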

Related

Retrieving an encryption key via an HTTP call to the API?

Can you tell me if the following flow is good practice for retrieving an encryption key?
I have an Angular app with a custom encryption service, created by me and an npm library, which currently uses a simple string value as the key.
I also store the access token inside a cookie, and its value is encrypted.
While executing a standard CRUD operation against my API, is it good practice for the method first to execute a separate HTTP GET request to fetch the encryption key from the API (from a separate endpoint), then decrypt the cookie, and THEN send it in the HTTP request?
For example, look at this pseudo-code:
getAllProductsAsAdmin(): Observable<any> {
  let encryptionKey = callApiAndGetKeyMethod(this.encryptionKeyApiUrl);
  let decryptedCookie = decryptCookie(this.existingCookie, encryptionKey);
  this.headers = this.headers.set('Authorization', decryptedCookie);
  return this.httpClient.get<IGetProductAdminModel[]>(this.getAllProductsAsAdminUrl, { headers: this.headers })
    .pipe(
      tap(data => console.log('All:', JSON.stringify(data))),
      catchError(this.handleError)
    );
}
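Note: since callApiAndGetKeyMethod would actually return an Observable (Angular's HttpClient is asynchronous), the real flow would need to chain the calls, e.g. with switchMap. A sketch using the same hypothetical helpers:
// At the top of the service file:
// import { switchMap, catchError } from 'rxjs/operators';

getAllProductsAsAdmin(): Observable<IGetProductAdminModel[]> {
  // Fetch the key first, then decrypt the cookie, then run the authorized call.
  return callApiAndGetKeyMethod(this.encryptionKeyApiUrl).pipe(
    switchMap((encryptionKey: string) => {
      const decryptedCookie = decryptCookie(this.existingCookie, encryptionKey);
      const headers = this.headers.set('Authorization', decryptedCookie);
      return this.httpClient.get<IGetProductAdminModel[]>(
        this.getAllProductsAsAdminUrl, { headers });
    }),
    catchError(this.handleError)
  );
}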

Sign in with Google - What should I do with `nonce`?

What I'm doing now:
Using the JavaScript API to render the button on my web page.
When the Sign in with Google flow is complete, my client-side JavaScript callback is called.
That callback sends the given .credentials string to my server.
The backend server (Node.js) calls the google-auth-library's OAuth2Client.verifyIdToken method on the .credentials string, which returns the user's email address (among other things), which my server uses to verify the user and create a session.
Everything works, but I'm wondering if there are any security concerns I'm missing. In particular there's a nonce field. The docs (link) don't explain how to use it.
Note: I'm using "Sign in with Google" and not the deprecated "Google Sign-In".
Edit: I'm familiar with the concept of nonces and have used them when doing the OAuth 2 server-side flow myself. What I can't figure out is how the Sign in with Google SDK expects me to use its nonce parameter with the flow above, where I'm using both their client-side and server-side SDKs.
Nonces are used as a CSRF-prevention method. When you make a request to Google, you include a nonce, and when authentication is complete, Google will send the same nonce back. The magic in this method is that if the nonce does not match what you sent then you can ignore the response, because it was probably spoofed.
Read more about CSRF here: https://owasp.org/www-community/attacks/csrf
Nonces are usually cryptographically secure random strings/bytes.
I use crypto-random-string as a base to generate nonces, but any package with this functionality should suffice.
Sometimes I store nonces with a TTL in Redis, but other times I store nonces with an ID attached to the request so I can later verify it.
I'm telling you this because it took me a while to figure out this nonce stuff :P
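For illustration, a minimal sketch of generating a nonce with Node's built-in crypto module instead of a package (any cryptographically secure generator works; the Redis storage step is shown as a comment because the client setup is an assumption):
const { randomBytes } = require('crypto');

// Generate a cryptographically secure random nonce, hex-encoded.
function generateNonce(byteLength = 32) {
  return randomBytes(byteLength).toString('hex');
}

const nonce = generateNonce();
// Remember it so it can be compared after Google echoes it back, e.g.:
// await redisClient.set(`nonce:${nonce}`, 'pending', { EX: 300 });
console.log('nonce to embed in the sign-in request:', nonce);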
Using the example from Google's website (https://developers.google.com/identity/one-tap/android/idtoken-auth), I added the code for the nonce:
const nonce = '...'; // Supplied by client in addition to token
const { OAuth2Client } = require('google-auth-library');
const client = new OAuth2Client(CLIENT_ID);

async function verify() {
  const ticket = await client.verifyIdToken({
    idToken: token,
    audience: CLIENT_ID, // Specify the CLIENT_ID of the app that accesses the backend
    // Or, if multiple clients access the backend:
    // [CLIENT_ID_1, CLIENT_ID_2, CLIENT_ID_3]
  });
  const payload = ticket.getPayload();
  const serverNonce = payload['nonce'];
  if (nonce !== serverNonce) {
    // Return an error
  }
  const userid = payload['sub'];
  // If request specified a G Suite domain:
  // const domain = payload['hd'];
}

verify().catch(console.error);
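On the client side, the Sign in with Google library accepts a nonce field in google.accounts.id.initialize, and Google echoes it back inside the ID token's nonce claim. A sketch of wiring it up; fetchNonceFromServer and sendCredentialToServer are hypothetical helpers for your own endpoints:
const nonce = await fetchNonceFromServer(); // hypothetical: your server issues and remembers it

google.accounts.id.initialize({
  client_id: CLIENT_ID,
  nonce: nonce, // echoed back in the ID token's 'nonce' claim
  callback: (response) => {
    // Hypothetical helper: POST the credential to the backend, which runs
    // verifyIdToken and compares payload['nonce'] to the nonce it issued.
    sendCredentialToServer(response.credential);
  },
});
google.accounts.id.renderButton(document.getElementById('signInButton'), { theme: 'outline' });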

Using topics with request/response in MassTransit

I'm using MassTransit v6.3.1 on .NET Core with RabbitMQ v3. My use case is sending requests from an API gateway to other services. The services consume by topic, and the gateway uses a different topic per request. I'm trying to use request/response with MassTransit, but the request client declares the exchange type as fanout, and I can't change that type. I want to use a different routingKey per request with request/response. How can I do this?
In the gateway I have (Startup.cs):
cfg.AddRequestClient<ISimpleRequest>();
(custom controller)
await client.GetResponse<ISimpleResponse>(new { Data = "test request" });
In the other services I have (Startup.cs):
cfg.ReceiveEndpoint("TestGateway", ep =>
{
    ep.Consumer(() => new SimpleConsumer(context));
});
(custom consumer)
await context.RespondAsync<ISimpleResponse>(new { Data = "test response" });
I also tried declaring the exchange in RabbitMQ first, then creating the request from the clientFactory with the exchange URI, but I got an error like "...received 'fanout' but current is 'topic'".
There is a sample on using a direct exchange; topic exchanges are similar but add wildcard semantics. I'd suggest reviewing it to see how to configure topology with RabbitMQ using MassTransit.
Sample
There is also documentation on how to set up routing keys with exchange types.

PDFKit not starting to stream the HTTP response until I call doc.end()

I am using pdfkit to generate a PDF at runtime and returning it in the HTTP response for download. I am able to download the file in the browser, but the download dialog does not open immediately; instead it waits until doc.end() is called. I guess pdfkit is unable to push the stream efficiently. Has anybody else faced this? If so, please advise.
Here is the sample code I am trying:
const PDFDocument = require('pdfkit');

exports.testPdfKit = functions.https.onRequest((request, response) => {
  // Create the pdf document and set the headers before piping any data
  const doc = new PDFDocument();
  response.set('Content-Disposition', 'attachment; filename=testpdfstream.pdf');
  response.writeHead(200, { 'Content-Type': 'application/pdf' });
  doc.pipe(response);

  const bigText = 'some big text';
  for (let i = 0; i < 1000; i++) {
    console.log('inside iteration -', i);
    doc.text(bigText);
    doc.addPage();
  }
  doc.end();
});
I am implementing this functionality on Firebase Functions, which uses Express.js internally to process HTTP requests. Since I need to generate bigger files, streaming is a must for me.
HTTP functions cannot stream the input or output of the function. The entire request is delivered in one chunk of memory, and the response is collected and sent back to the client in one chunk. The maximum size of both is 10MB. There are no workarounds for this limitation of Cloud Functions (but it does help your system scale better).
If you need streaming or websockets, you'll need to use a different product, such as App Engine or Compute Engine.
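For comparison, the same pdfkit code streams incrementally on a plain Node/Express server, where the response is a real writable stream (a sketch; the route and port are illustrative):
const express = require('express');
const PDFDocument = require('pdfkit');

const app = express();

app.get('/pdf', (req, res) => {
  // On a regular Node server, chunks are flushed to the client as pdfkit
  // produces them, so the browser's download dialog opens immediately.
  res.set('Content-Disposition', 'attachment; filename=testpdfstream.pdf');
  res.set('Content-Type', 'application/pdf');
  const doc = new PDFDocument();
  doc.pipe(res);
  for (let i = 0; i < 1000; i++) {
    doc.text('some big text');
    doc.addPage();
  }
  doc.end();
});

app.listen(3000);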

Encoded / Encrypted body before verifying a Pact

A server I need to integrate with returns its answers encoded as a JWT. Worse, the response body is actually JSON of the form:
{d: token} with token = JWT.encode({id: 123, field: "John", etc.})
I'd like to run a pact verification on the content of the decoded token. I know I can easily have a pact verifying that I get back a {d: string}, but I can't do an exact match on the string (as the JWT contains some varying IDs). What I want is the following, which presumes the addition of a new Pact::JWT functionality.
my_provider.
  upon_receiving('my request')
  .with(method: :post,
        path: '/order',
        headers: { 'Content-Type' => 'application/json' }
  ).will_respond_with(
    status: 200,
    headers: { 'Content-Type' => 'application/json; charset=utf-8' },
    body: {
      d: Pact::JWT({
        id: Pact.like(123),
        field: Pact.term(generate: "John", matcher: /J.*/)
      }, signing_key, algo)
    })
Short of adding this Pact::JWT, is there a way to achieve this kind of result?
I am already using the pact proxy to run my verification. I know you can modify the request before sending it for verification (How do I verify pacts against an API that requires an auth token?). Can you modify the response once you receive it from the proxy server?
If that's the case, I can plan for the following workaround:
a switch in my actual code to sometimes expect the answers decoded instead of in the JWT
run my tests once with the switch off (normal code behaviour, mocks return JWT-encoded data)
run my tests a second time with the switch on (code expects data already decoded, mocks return decoded data)
use the contract JSON from this second run
hook into the proxy:verify task to decode the JWT on the fly, and use the existing pact mechanisms for verification (the step I do not know how to do)
My code is in Ruby. I do not have access to the provider.
Any suggestions appreciated! Thanks
You can modify the request or response by using (another) proxy app.
class ProxyApp
  def initialize(real_provider_app)
    @real_provider_app = real_provider_app
  end

  def call(env)
    response = @real_provider_app.call(env)
    # modify response here
    response
  end
end

Pact.service_provider "My Service Provider" do
  app { ProxyApp.new(RealApp) }
end
As a tool, I don't expect Pact to give you this behaviour out of the box.
In my opinion, the best approach is:
Do not change source code only for tests.
Make sure your tests verify the encoded JSON only (generate the encoded expected JSON in the test and verify that against the actual).
