Is it possible to load test a web application using Cypress?

I have to log in with 1k+ users to test the front end. The other goal is endurance testing: how does the app behave as the number of concurrent users grows?

I'd suggest changing your approach: a dedicated load testing tool like k6 is the right tool for this job.
Start by:
Run a test.
Add virtual users.
Increase the test duration.
Ramp the number of requests up and down as the test runs.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '5m', target: 500 }, // simulate ramp-up of traffic from 1 to 500 users over 5 minutes
    { duration: '10m', target: 500 }, // stay at 500 users for 10 minutes
    { duration: '5m', target: 0 }, // ramp-down to 0 users
  ],
};

export default function () {
  const res = http.get('https://httpbin.test.k6.io/');
  check(res, { 'status was 200': (r) => r.status === 200 });
  sleep(1);
}
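If you save this as script.js, you can run it locally with "k6 run script.js" and then tune the stages until the ramp matches your 1k-user target.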

Related

NextJS ActionCable Proxy

So I'm trying to do two things at the same time and it's not going too well.
I have a NextJS app and a Rails API server this app connects to. For authentication I'm using a JWT token stored in an http-only encrypted cookie that the Rails API sets and the front end should not be touching. Naturally, that means the frontend has to send all API requests through the NextJS server, which proxies them to the real API.
To do that I have set up a next-http-proxy-middleware in my /pages/api/[...path] in the following way:
import type { NextApiRequest, NextApiResponse } from "next"
import httpProxyMiddleware from "next-http-proxy-middleware"

export const config = { api: { bodyParser: false, externalResolver: true } }

export default function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  httpProxyMiddleware(req, res, {
    target: process.env.BACKEND_URL,
    pathRewrite: [{ patternStr: "^/?api", replaceStr: "" }],
  })
}
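The effect: every request to /api/... on the NextJS origin has the ^/?api prefix stripped and is forwarded to BACKEND_URL with its headers, including the http-only cookie, intact.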
Which works great, and life would be just great, but it turns out I need to do the same thing with ActionCable subscriptions. Not to worry: I found some handy tutorials, added @rails/actioncable to my package list, and off we go.
import { useCurrentUser } from "../../../data";
import { useEffect, useState } from "react";

const UserSocket = () => {
  const { user } = useCurrentUser()
  const [roomSocket, setRoomSocket] = useState<any>(null)

  const loadConsumer = async () => {
    // @ts-ignore
    const { createConsumer } = await import("@rails/actioncable")
    const newCable = createConsumer('/api/wsp')
    console.log('Cable loaded')
    setRoomSocket(newCable.subscriptions.create({
      channel: 'RoomsChannel'
    }, {
      connected: () => { console.log('Room Connected') },
      received: (data: any) => { console.log(data) },
    }))
    return newCable
  }

  useEffect(() => {
    if (typeof window !== 'undefined' && user?.id) {
      console.log('Cable loading')
      loadConsumer().then(() => {
        console.log('Cable connected')
      })
    }
    return () => { roomSocket?.disconnect() }
  }, [typeof window, user?.id])

  return <></>
}

export default UserSocket
Now when I load the page with that component, I get the log output all the way to Cable connected, but I don't see the Room Connected part.
I looked at the requests being made, and for some reason I see 2 requests to wsp. The first is directed at the Rails backend (which means the proxy worked), but it lacks the Cookie headers and thus gets disconnected like this:
{
  "type": "disconnect",
  "reason": "unauthorized",
  "reconnect": false
}
The second request just shows as ws://localhost:5000/api/wsp (my NextJS dev server) with provisional headers, and it hangs in pending. So neither actually connects properly to the websocket. But if I replace the /api/wsp parameter with the actual hardcoded API address (ws://localhost:3000/wsp), it all works at once (that, however, would not work in production, since those will be different domains).
Can anyone help me here? I might be missing something dead obvious but can't figure it out.
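There's no accepted fix in this thread, but one likely culprit: Next.js API routes only handle plain HTTP, so the WebSocket upgrade for /api/wsp never gets proxied, which would explain why the hardcoded ws://localhost:3000/wsp works while /api/wsp hangs. A minimal sketch of forwarding the upgrade yourself with http-proxy, assuming a custom Next.js server is an option (the file name, port, and rewrite below are assumptions, not the poster's setup):
// server.ts — a sketch, not the poster's code. Runs Next.js behind a plain
// http server so the WebSocket upgrade can be proxied to Rails by hand.
import { createServer } from "http";
import next from "next";
import httpProxy from "http-proxy";

const app = next({ dev: process.env.NODE_ENV !== "production" });
const handle = app.getRequestHandler();
// Same target as the API-route proxy above.
const proxy = httpProxy.createProxyServer({ target: process.env.BACKEND_URL, ws: true });

app.prepare().then(() => {
  const server = createServer((req, res) => handle(req, res));
  // Forward websocket upgrades for /api/wsp to Rails, cookies included.
  server.on("upgrade", (req, socket, head) => {
    if (req.url?.startsWith("/api/wsp")) {
      req.url = req.url.replace(/^\/api/, ""); // mirror the pathRewrite rule
      proxy.ws(req, socket, head);
    }
  });
  server.listen(5000); // the dev port mentioned above
});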

Playwright tests work locally but fail when run in "npx playwright test" in Azure DevOps Pipelines with "error page.goto: net::ERR_CONNECTION_RESET"

Context:
Playwright Version: Version 1.25.0
Operating System: windows-latest
Node.js version: v16.13.1
Browser: Firefox and Chrome
Extra: Run via Azure DevOps Pipelines in the cloud. The application login starts on the app domain, goes to the Azure login domain, then returns to the original domain, if it matters.
What is the goal? To run our Playwright test suite in an Azure DevOps Pipeline with a trigger.
The problem: the example test suite works locally with "npx playwright test", running all tests in separate files. In Azure Pipelines, it only works with 1 or 2 tests, e.g. "npx playwright test example.spec.ts". If we use "npx playwright test" in azure-pipelines.yml and run all tests, 1-2 tests pass and all others fail, usually at login, with "page.goto: net::ERR_CONNECTION_RESET" (see the Azure Pipelines log below) or "page.goto: NS_ERROR_NET_RESET" with Firefox. Sometimes I also get "page.goto: NS_ERROR_UNKNOWN_HOST".
What I have tried already: switching from ubuntu to windows in the .yml, running with only 1 worker and with fullyParallel: false, and limiting the browsers to Chrome only or Firefox only. I have tried to find a solution online, but I haven't encountered this specific problem.
azure-pipelines.yml
trigger:
- tests/playwright

pool:
  vmImage: 'windows-latest'

steps:
- script: echo Running azure-pipelines.yml...
  displayName: 'Run a one-line script'
- task: NodeTool@0
  inputs:
    versionSpec: '16.x'
  displayName: 'nodetool 16.x'
- task: Npm@1
  inputs:
    command: 'ci'
- task: CmdLine@2
  inputs:
    script: 'npx playwright install --with-deps'
- task: CmdLine@2
  inputs:
    script: 'set CI=true && echo %CI% && npx playwright test'
Azure Pipelines Job log with the errors
Retry #1 ---------------------------------------------------------------------------------------
page.goto: net::ERR_CONNECTION_RESET at https://mywebsite.com
=========================== logs ===========================
navigating to "https://mywebsite.com", waiting until "load"
============================================================
6 | console.log(`beforeEach initiated, running ${testInfo.title}`);
7 | const lg = new login(page);
> 8 | await lg.loginToAppWithAllLicenses();
| ^
9 | });
10 |
at loginToAppWithAllLicenses (D:\a\1\s\pages\login.ts:13:25)
7) [chromium] › example.spec.ts:17:5 › example test suite › Check that News and Messages is present
Test timeout of 60000ms exceeded while running "beforeEach" hook.
3 | import { login } from '../pages/login';
4 |
> 5 | test.beforeEach(async ({ page }, testInfo) => {
| ^
6 | console.log(`beforeEach initiated, running ${testInfo.title}`);
7 | const lg = new login(page);
8 | await lg.loginToAppWithAllLicenses();
page.click: Target closed
An example test that can fail
import { test, expect, Page } from '@playwright/test';
import { Utils } from '../pages/utils';
import { login } from '../pages/login';

test.beforeEach(async ({ page }, testInfo) => {
  console.log(`beforeEach initiated, running ${testInfo.title}`);
  const lg = new login(page);
  await lg.loginToAppWithAllLicenses();
});
test.describe('example test suite', () => {
  test("Check that News and Messages is present", async ({ page }) => {
    await page.goto('https://mywebsite.com');
    // Check that News and Messages are visible to assert that the page has loaded
    await expect(page.locator('ls-home-news >> text=News'))
      .toHaveText('News');
    await expect(page.locator('ls-home-messages >> text=Messages'))
      .toHaveText('Messages');
  });
});
The login that is performed in beforeEach
import { Page } from '@playwright/test';

export class login {
  private page: Page;

  constructor(page: Page) {
    this.page = page;
  }

  async loginToAppWithAllLicenses() {
    await this.page.goto('https://mywebsite.com');
    // Click div[role="button"]:has-text("Email")
    await Promise.all([
      this.page.waitForNavigation(),
      this.page.locator('div[role="button"]:has-text("Email")').click(),
    ]);
    // Fill in credentials
    await this.page.click('[placeholder="Email Address"]');
    await this.page.locator('[placeholder="Email Address"]').fill('email here..');
    await this.page.click('[placeholder="Password"]');
    await this.page.locator('[placeholder="Password"]').fill('password here..');
    // Sign in and select company
    await this.page.click('button:has-text("Sign in")');
    await this.page.click('.b-number-cell');
    await this.page.waitForLoadState('networkidle');
  }
}
playwright.config.ts
import type { PlaywrightTestConfig } from '@playwright/test';
import { devices } from '@playwright/test';

const config: PlaywrightTestConfig = {
  testDir: './tests',
  timeout: 60 * 1000,
  expect: {
    timeout: 5000
  },
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: 1,
  workers: process.env.CI ? 1 : undefined,
  reporter: 'html',
  use: {
    actionTimeout: 0,
    screenshot: 'only-on-failure',
    trace: 'off',
  },
  projects: [
    {
      name: 'chromium',
      use: {
        ...devices['Desktop Chrome'],
      },
    },
  ],
  outputDir: 'test-results/',
};

export default config;
I finally managed to solve this by moving the test servers to Azure. It seems that at some point the firewall of the server where the application was running simply stopped communicating.
So the problem wasn't really a Playwright problem, but a network issue caused by the server's firewall settings.
After we moved the test application to Azure we haven't faced such issues anymore, and the code didn't change.

LinkedIn API: upload video returns 500

Following the LinkedIn API documentation, I'm trying to upload a video. Unfortunately, I get a 500 error with no details when I run the PUT request with the binary video file against the upload endpoint returned by the initialization request.
My video fits the video specifications.
Did I miss something?
I was in the same situation a few days ago.
The solution: if your file is larger than 4MB, you must split it into parts.
The initialize upload response gives you a list of uploadUrls; use each link with the corresponding part of the file.
Thanks @ARGOUBI Sofien for your answer. I found my mistake: the fileSizeBytes value was wrong and gave me only one link for the upload. With the correct value, I get several endpoints.
So I'm initializing the upload with this body:
{
  "initializeUploadRequest": {
    "owner": "urn:li:organization:MY_ID",
    "fileSizeBytes": 10903312,
    "uploadCaptions": false,
    "uploadThumbnail": true
  }
}
I got this response:
{
  "value": {
    "uploadUrlsExpireAt": 1657618793558,
    "video": "urn:li:video:C4E10AQEDRhUsYL99HQ",
    "uploadInstructions": [
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3[...]t3yak1",
        "lastByte": 4194303,
        "firstByte": 0
      },
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3cub[...]f13yak1",
        "lastByte": 8388607,
        "firstByte": 4194304
      },
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2V[...]V3yak1",
        "lastByte": 10903311,
        "firstByte": 8388608
      }
    ],
    "uploadToken": "",
    "thumbnailUploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL9[...]mF3yak1"
  }
}
That looks better ✌️
EDIT
After several tests, the upload is OK when I have only one upload link, but I don't get any response from the server when I have several upload URLs.
My code:
const uploadPromises: Array<() => Promise<AxiosResponse<void>>> = [];

uploadData.data.value.uploadInstructions.forEach((uploadInstruction: UploadInstructionType) => {
  const bufferChunk: Buffer = videoStream.data.subarray(uploadInstruction.firstByte, uploadInstruction.lastByte + 1);
  const func = async (): Promise<AxiosResponse<void>> =>
    linkedinRestApiRepository.uploadMedia(uploadInstruction.uploadUrl, bufferChunk, videoContentType, videoContentLength);
  uploadPromises.push(func);
});

let uploadVideoResponses: Array<AxiosResponse<void>>;
try {
  uploadVideoResponses = await series(uploadPromises);
} catch (e) {
  console.error(e);
}
Something is wrong when I have several upload links, but I don't know what 😞
In my case I divided my file buffer into chunks, then used map to upload each chunk with the right uploadUrl, as in the sketch below.
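To make that concrete, here is a rough sketch of the sequential per-chunk upload (axios-based; uploadInstructions is the array from the initializeUpload response above, while videoBuffer and accessToken are assumed inputs). If I read the LinkedIn docs correctly, each PUT returns an ETag header that the later finalizeUpload call needs, so it's worth capturing them:
import axios from "axios";

// Hypothetical helper: PUT each byte range one by one and collect the ETags.
async function uploadChunks(
  uploadInstructions: Array<{ uploadUrl: string; firstByte: number; lastByte: number }>,
  videoBuffer: Buffer,
  accessToken: string,
): Promise<string[]> {
  const etags: string[] = [];
  for (const { uploadUrl, firstByte, lastByte } of uploadInstructions) {
    const chunk = videoBuffer.subarray(firstByte, lastByte + 1);
    const res = await axios.put(uploadUrl, chunk, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/octet-stream",
      },
      // Chunks are ~4 MB, so lift axios' default body size limits.
      maxBodyLength: Infinity,
      maxContentLength: Infinity,
    });
    etags.push(String(res.headers.etag)); // kept for the finalizeUpload step
  }
  return etags;
}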

Setting PWA Service Worker File

I have a blog and I want to create a PWA version of it. I already created the manifest, but I am a bit confused about the service worker settings. I have created a service worker file using the guides on the Workbox website, but I am not sure it will work. I want my PWA to support pre-caching; network-first pages; caching of images, JS, CSS, fonts, etc.; offline analytics; and background and periodic sync. Is my code OK for these features?
Here is my service worker code:
// Yapi Tasarim Akademisi Service Worker
importScripts('https://storage.googleapis.com/workbox-cdn/releases/5.1.2/workbox-sw.js');
workbox.setConfig({
  debug: true,
});

import {
  pageCache,
  imageCache,
  staticResourceCache,
  googleFontsCache,
  offlineFallback,
} from 'workbox-recipes';
pageCache();
googleFontsCache();
staticResourceCache();
imageCache();
offlineFallback();
// Background sync
import {BackgroundSyncPlugin} from 'workbox-background-sync';
import {registerRoute} from 'workbox-routing';
import {NetworkOnly} from 'workbox-strategies';
const bgSyncPlugin = new BackgroundSyncPlugin('myQueueName', {
  maxRetentionTime: 24 * 60 // Retry for a max of 24 hours (specified in minutes)
});

registerRoute(
  /\/api\/.*\/*.json/,
  new NetworkOnly({
    plugins: [bgSyncPlugin]
  }),
  'POST'
);

const statusPlugin = {
  fetchDidSucceed: ({response}) => {
    if (response.status >= 500) {
      // Throwing anything here will trigger fetchDidFail.
      throw new Error('Server error.');
    }
    // If it's not 5xx, use the response as-is.
    return response;
  },
};
// Add statusPlugin to the plugins array in your strategy.
// Offline Analytics
import * as googleAnalytics from 'workbox-google-analytics';
googleAnalytics.initialize();
// Pre-cache
import {precacheAndRoute} from 'workbox-precaching';
precacheAndRoute([
  {url: '/index.html', revision: '383676'},
  {url: '/styles/app.0c9a31.css', revision: null},
  {url: '/scripts/app.0d5770.js', revision: null},
  // ... other entries ...
]);
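One caveat, based on how Workbox is packaged rather than on anything in the post: importScripts of the CDN bundle and bare ES import statements are two different consumption modes, so this file won't run as-is. The import syntax assumes the worker is compiled with a bundler, in which case the importScripts and workbox.setConfig lines can be dropped. The page also still needs to register the compiled worker; a minimal sketch, assuming the built file is served as /sw.js:
// In the page's own script, not in the service worker:
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("/sw.js") // assumed output path of the bundled worker
      .catch((err) => console.error("Service worker registration failed:", err));
  });
}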

DynamoDB provisioned Read/Write Capacity Units exceeded unexpectedly

I run a program that sends data to DynamoDB using API Gateway and Lambdas.
All the data sent to the DB is small, and it is only sent from about 200 machines.
I'm still on the free tier, and sometimes, unexpectedly in the middle of the month, the provisioned read/write capacity rises, and from that day on I pay a constant amount each day until the end of the month.
Can someone tell from the image below what happened on 03/13 that caused this spike in the charts and made the provisioned capacity rise from 50 to 65?
I can't tell what happened based on those charts alone, but some things to consider:
You may not be aware of the new "PAY_PER_REQUEST" billing mode option for DynamoDB tables which allows you to mostly forget about manually provisioning your throughput capacity: https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
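For an existing table, switching to on-demand is a single UpdateTable call; a sketch in the same aws-sdk v2 client style used below (the table name is hypothetical):
import AWS from "aws-sdk";

const dynamodb = new AWS.DynamoDB();

// Switch an existing table to on-demand billing; reads and writes are then
// billed per request instead of against provisioned capacity units.
dynamodb.updateTable({
  TableName: "my-table", // hypothetical table name
  BillingMode: "PAY_PER_REQUEST",
})
  .promise()
  .then(() => console.log("Table switched to on-demand billing"))
  .catch(console.error);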
Also, it might not make sense for your use case, but for free-tier projects I've found it useful to proxy all writes to DynamoDB through an SQS queue (use the queue as an event source for a Lambda with a reserved concurrency that is compatible with your provisioned throughput). This is easy if your project is reasonably event-driven: build your DynamoDB request object/params, write it to SQS, and have the next step be a Lambda that is triggered from the DynamoDB stream, so you aren't expecting a synchronous response from the write operation in the first Lambda. Like this:
Example serverless config for SQS-triggered Lambda:
dynamodb_proxy:
  description: SQS event function to write to DynamoDB table '${self:custom.dynamodb_table_name}'
  handler: handlers/dynamodb_proxy.handler
  memorySize: 128
  reservedConcurrency: 95 # see custom.dynamodb_active_write_capacity_units
  environment:
    DYNAMODB_TABLE_NAME: ${self:custom.dynamodb_table_name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource:
        - Fn::GetAtt: [ DynamoDbTable, Arn ]
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
  events:
    - sqs:
        batchSize: 1
        arn:
          Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
Example write to SQS:
await sqs.sendMessage({
  MessageBody: JSON.stringify({
    method: 'putItem',
    params: {
      TableName: DYNAMODB_TABLE_NAME,
      Item: {
        ...attributes,
        created_at: {
          S: createdAt.toString(),
        },
        created_ts: {
          N: createdAtTs.toString(),
        },
      },
      ...conditionExpression,
    },
  }),
  QueueUrl: SQS_QUEUE_URL_DYNAMODB_PROXY,
}).promise();
SQS-triggered Lambda:
import retry from 'async-retry';
import { dynamodb } from '../lib/aws-clients';

const {
  DYNAMODB_TABLE_NAME
} = process.env;

export const handler = async (event) => {
  const message = JSON.parse(event.Records[0].body);
  if (message.params.TableName !== DYNAMODB_TABLE_NAME) {
    console.log(`DynamoDB proxy event table '${message.params.TableName}' does not match current table name '${DYNAMODB_TABLE_NAME}', skipping.`);
  } else if (message.method === 'putItem') {
    let attemptsTaken;
    await retry(async (bail, attempt) => {
      attemptsTaken = attempt;
      try {
        await dynamodb.putItem(message.params).promise();
      } catch (err) {
        if (err.code && err.code === 'ConditionalCheckFailedException') {
          // expected exception
          // if (message.params.ConditionExpression) {
          //   const conditionExpression = message.params.ConditionExpression;
          //   console.log(`ConditionalCheckFailed: ${conditionExpression}. Skipping.`);
          // }
        } else if (err.code && err.code === 'ProvisionedThroughputExceededException') {
          // retry
          throw err;
        } else {
          bail(err);
        }
      }
    }, {
      retries: 5,
      randomize: true,
    });
    if (attemptsTaken > 1) {
      console.log(`DynamoDB proxy event succeeded after ${attemptsTaken} attempts`);
    }
  } else {
    console.log(`Unsupported method ${message.method}, skipping.`);
  }
};
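The load-limiting here comes from batchSize: 1 plus reservedConcurrency: at most 95 Lambda instances each write one item at a time, so the write rate stays near the table's provisioned capacity, and anything beyond that waits in the queue (or is retried above) instead of failing with ProvisionedThroughputExceededException.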
