Bitbucket REST API - Getting branches from a repository

I am looking for the endpoints available in the Bitbucket API for getting the branches of a repository, and a specific branch.
I was expecting to see something like:
GET /2.0/repositories/{workspace}/{repo_slug}/branches/
GET /2.0/repositories/{workspace}/{repo_slug}/branches/{branch}
so that I can get the commits from a specific branch.
I know I can get commits, but that endpoint is scoped to the repository as a whole.
Do you know if there are endpoints to work with branches and drill down into their hierarchy?
I looked over the documentation for API 2.0 but did not see what I was looking for, which is why I am posting this question here.
In addition, I saw some time ago that this was not possible according to this answer, but that refers to version 1.0 of the API. Is it still true?

When hitting this endpoint:
https://api.bitbucket.org/2.0/repositories/<workspace>/<repository-name>
# GET /2.0/repositories/{workspace}/{repo_slug}
you get a JSON document as a result. Its links attribute contains a key called branches. It is something like this:
{
  "scm": "git",
  "has_wiki": false,
  "links": {
    "watchers": {
      "href": "https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/watchers"
    },
    "branches": {
      "href": "https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/refs/branches"
    },
    ...
So you can hit the endpoint and get the branches:
https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/refs/branches
# GET /2.0/repositories/{workspace}/{repo_slug}/refs/branches
And you can get a specific branch with:
https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/refs/branches/<branch-name>
# GET /2.0/repositories/{workspace}/{repo_slug}/refs/branches/<branch-name>
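For completeness, here is a minimal Python sketch of those two calls, plus fetching the commits reachable from one branch via the commits endpoint; the workspace, repository, branch name, and app-password credentials are placeholders:

import requests

BASE = "https://api.bitbucket.org/2.0"
auth = ("my-username", "my-app-password")  # placeholder credentials
workspace, repo_slug, branch = "my-workspace", "my-repo", "main"  # placeholders

# List all branches of the repository (the response is paginated under "values")
branches = requests.get(f"{BASE}/repositories/{workspace}/{repo_slug}/refs/branches", auth=auth).json()
for b in branches["values"]:
    print(b["name"])

# Get a single branch; its "target" is the tip commit
one = requests.get(f"{BASE}/repositories/{workspace}/{repo_slug}/refs/branches/{branch}", auth=auth).json()
print(one["target"]["hash"])

# Commits reachable from that branch, which answers the original question
commits = requests.get(f"{BASE}/repositories/{workspace}/{repo_slug}/commits/{branch}", auth=auth).json()
for c in commits["values"]:
    print(c["hash"], c["message"].splitlines()[0])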

Related

Issues while expanding URNs in versioned creatives response

We are migrating our APIs from unversioned to versioned, and we are having an issue while trying to get asset data from the creatives endpoint. The response has a reference to a post, but we are unable to use the expanded-URNs concept to get the inner media details of the post URN. Is there a different approach we need to follow here?
I've read through all the migration documents, and the response decoration document also points to the v2 endpoint and projection parameters instead of the REST endpoint and fields parameter. Document reference.
Previous Request
GET https://api.linkedin.com/v2/adCreativesV2?ids[0]=181794673&projection=(results(*(variables(data(com.linkedin.ads.SponsoredVideoCreativeVariables(userGeneratedContentPost~(specificContent(com.linkedin.ugc.ShareContent(shareCommentary,media(*(media~:playableStreams(),title)))))))))))
This request gets us the media details of the creatives without making multiple calls.
Current Request
GET https://api.linkedin.com/rest/creatives?ids=List(urn%3Ali%3AsponsoredCreative%3A181794673)&fields=(results(*(content(reference~($URN)))))
I am looking at the response I get from https://api.linkedin.com/rest/creatives?ids=List(urn%3Ali%3AsponsoredCreative%3A181794673) and trying to build the fields request from it, but no luck yet; I get the error below.
{
  "status": 400,
  "code": "ILLEGAL_ARGUMENT",
  "message": "Invalid projection parameter: (results(*(content(reference~($URN)))))"
}
But when I tried projection in place of fields, I got this response:
{
  "results": {
    "urn:li:sponsoredCreative:181794673": {
      "content": {
        "reference": "urn:li:ugcPost:6905584391779950593",
        "reference!": {
          "message": "Not enough permissions to access deco: ugcPosts.BATCH_GET.20230101",
          "status": 403
        }
      }
    }
  }
}
Can someone help me get the data the way we got it before, without making extra calls? Otherwise I think I will have to make separate calls to the Creatives -> Posts -> Video, Image, Share, etc. endpoints.
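The 403 on reference! suggests the app has not been granted the ugcPosts BATCH_GET decoration permission, so until that is sorted out the fallback is the multi-call flow. A rough Python sketch of that fallback, assuming the versioned Posts API at /rest/posts/{encoded URN}; the access token and LinkedIn-Version value are placeholders:

import urllib.parse
import requests

TOKEN = "ACCESS_TOKEN"  # placeholder
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "LinkedIn-Version": "202301",          # placeholder version month
    "X-Restli-Protocol-Version": "2.0.0",
}
BASE = "https://api.linkedin.com/rest"

creative_urn = urllib.parse.quote("urn:li:sponsoredCreative:181794673", safe="")

# Step 1: fetch the creative and read which post it references
creatives = requests.get(f"{BASE}/creatives?ids=List({creative_urn})", headers=HEADERS).json()
content = creatives["results"]["urn:li:sponsoredCreative:181794673"]["content"]
post_urn = content["reference"]  # e.g. "urn:li:ugcPost:6905584391779950593"

# Step 2: fetch the referenced post in a separate call
post = requests.get(f"{BASE}/posts/{urllib.parse.quote(post_urn, safe='')}", headers=HEADERS).json()
print(post)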

Windchill REST API endpoint to fill BOM from file

We are developing an internal project that uses the Windchill OData REST API to fill the eBOM for a given part. What we are trying to do is read the BOM info from another piece of software we have and send it to the part in Windchill. But we cannot find an endpoint in servlet/odata to do it.
We guess the idea is to replicate the manual process, so we already know how to create, check out, and check in a part. However, we still cannot find an endpoint to modify the part and add the eBOM.
We know about PartList, PartListItem, and GetPartStructure in the PTC Product Management domain, but these are GET endpoints and are only useful for retrieving data, including the BOM. We cannot use them to modify the content.
I've found the solution.
The endpoint to use is:
POST /ProdMgmt/Parts('VR:wt.part.WTPart:xxxxxxxxx')/Uses
The body of the request must contain:
{
  "Quantity": 1,
  "Unit": {
    "Value": "ea",
    "Display": "Each"
  },
  "TraceCode": {
    "Value": "0",
    "Display": "Untraced"
  },
  "Uses#odata.bind": "Parts('OR:wt.part.WTPart:yyyyyyyyy')"
}
where Uses#odata.bind contains the ID of the part we want to link.
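For reference, a minimal Python sketch of this call; the host, credentials, and part IDs are placeholders, and the CSRF-token step assumes the GetCSRFToken() endpoint that Windchill REST Services exposes for modifying requests:

import requests

BASE = "https://windchill.example.com/Windchill/servlet/odata"  # placeholder host
session = requests.Session()
session.auth = ("wcadmin", "wcadmin")  # placeholder credentials

# Windchill REST Services normally requires a CSRF nonce on POST requests
token = session.get(f"{BASE}/PTC/GetCSRFToken()").json()
headers = {token["NonceKey"]: token["NonceValue"]}

body = {
    "Quantity": 1,
    "Unit": {"Value": "ea", "Display": "Each"},
    "TraceCode": {"Value": "0", "Display": "Untraced"},
    # key copied from the answer above; the linked child part ID is a placeholder
    "Uses#odata.bind": "Parts('OR:wt.part.WTPart:yyyyyyyyy')",
}

# the parent part must be checked out first; its ID is a placeholder
resp = session.post(f"{BASE}/ProdMgmt/Parts('VR:wt.part.WTPart:xxxxxxxxx')/Uses",
                    json=body, headers=headers)
resp.raise_for_status()
print(resp.status_code)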

How to list all active rooms in Janus with SFU plugin?

I'm trying to build a simple A-Frame chat hub, like Mozilla Hubs, using networked-aframe with Janus as the adapter. I've installed everything on one server and everything is working fine.
However, I'm trying to set a limit on the maximum number of users connected to one 'room', because otherwise the browser might crash from having too many avatars to render, and then redirect new users to a new randomly generated room automatically.
Is there a way to do this with the available Janus API? So far I've tried the Janus signalling API for the SFU plugin, because it's the only reference that mentions how to get the number of users in one room, although not directly.
The example request body:
{
  "kind": "join",
  "room_id": room ID,
  "user_id": user ID,
  "subscribe": [none|subscription object]
}
The example result; but I don't think this is the way to achieve what I want, because I need the list of ALL rooms, not just one room:
{
  "success": true,
  "response": {
    "users": {"room_alpha": ["123", "789"]}
  }
}
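If the SFU plugin's signalling API has no request that lists rooms, one possible workaround is the Janus core Admin/Monitor API, which can enumerate sessions and the plugin handles attached to them; that gives a rough server-side view of occupancy. A sketch in Python, assuming the Admin API is enabled on its default port 7088 and using a placeholder admin_secret:

import uuid
import requests

ADMIN = "http://localhost:7088/admin"  # default Janus Admin API endpoint (assumed enabled)
SECRET = "janusoverlord"               # placeholder admin_secret

def admin_request(payload, path=""):
    # every Admin API message needs a transaction id and the admin secret
    payload.update({"transaction": uuid.uuid4().hex, "admin_secret": SECRET})
    return requests.post(ADMIN + path, json=payload).json()

# Enumerate sessions, then the handles attached to each session; each handle
# attached to the SFU plugin corresponds roughly to one connected user.
total_handles = 0
for session_id in admin_request({"janus": "list_sessions"})["sessions"]:
    handles = admin_request({"janus": "list_handles"}, path=f"/{session_id}")["handles"]
    total_handles += len(handles)
print("approximate connected users across all rooms:", total_handles)

To break this down per room you would have to query handle_info for each handle and inspect the plugin-specific data, which is more work but does not require changing the plugin.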

Always get “Cannot parse non Measurement Protocol hits”

I have a little Python program that other people use, and I would like to offer opt-in telemetry so that I can get an idea of the usage patterns. Google Analytics 4 with the Measurement Protocol seems to be the thing I want to use. I have created a new property and a new data stream.
I have tried to validate the request by sending a POST to www.google-analytics.com/debug/mp/collect?measurement_id=G-LQDLGRLGZS&api_secret=JXGZ_CyvTt29ucNi9y0DkA with this JSON payload:
{
  "app_instance_id": "MyAppId",
  "client_id": "TestClient.xx",
  "events": [
    {
      "name": "login",
      "params": {}
    }
  ]
}
The response that I get is this:
{
  "validationMessages": [
    {
      "description": "Cannot parse non Measurement Protocol hits.",
      "validationCode": "INTERNAL_ERROR"
    }
  ]
}
I seem to be doing exactly what they do in the documentation and tutorials. I must be doing something wrong, but I don't know what is missing. What do I have to do in order to successfully validate the request?
Try removing the /debug part of the URL. In the example you followed it is not present, so your request is not quite the same.
We just came across the same issue, and the solution for us was to put https:// in front of the URL. Hope this helps.
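Putting both answers together, a minimal Python sketch of a validation request; the explicit https:// scheme matters, and the measurement_id, api_secret, and client_id values are placeholders:

import requests

# Explicit https:// scheme; without it the hit cannot be parsed.
# The /debug prefix sends the hit to the validation server instead of recording it.
url = "https://www.google-analytics.com/debug/mp/collect"
params = {"measurement_id": "G-XXXXXXXXXX", "api_secret": "YOUR_API_SECRET"}  # placeholders
payload = {
    "client_id": "TestClient.xx",  # placeholder client id
    "events": [{"name": "login", "params": {}}],
}

resp = requests.post(url, params=params, json=payload)
print(resp.json())  # an empty validationMessages list means the hit is valid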

Cosmos Db library Microsoft.Azure.DocumentDB.Core (2.1.0) - Actual REST invocations

We are attempting to use WireMock (https://github.com/WireMock-Net/WireMock.Net) to mock CosmosDb invocations, so we can build integration tests in our .NET Core 2.1 microservice.
By looking at the WireMock instance Request/Response entries, we can observe the following:
1) GET towards "/"
We mock the returned metadata of the databases.
THIS IS OK
2) GET towards the collection (in our case: "/dbs/Chunker/colls/RHTMLChunks")
Returns metadata about the collection.
THIS IS OK
3) POST of a query, towards the documents endpoint on the collection (in our case: "/dbs/Chunker/colls/RHTMLChunks/docs"), that results in one document being returned.
I have tried to emulate what we get when we run the exact same query against the CosmosDb instance in Postman, including headers and response.
However, I observe that the library runs the query again, and again, and again...
(I can see this by pausing in Visual Studio and looking at the request log in WireMock.)
Does anyone know what should be returned? I have set up WireMock to return the following JSON payload:
{
  "_rid": "q0dcAOelSAI=",
  "Documents": [
    {
      "id": "gL20020621z2D34-1",
      "ChunkSize": 658212,
      "TotalChunks": 2,
      "Metadata": {
        "Active": true,
        "PublishedDate": ""
      },
      "ChunkId": 1,
      "Markup": "<h1>hello</h1>",
      "MainDestination": "gL20020621z2D34",
      "_rid": "q0dcAOelSAIHAAAAAAAAAA==",
      "_self": "dbs/q0dcAA==/colls/q0dcAOelSAI=/docs/q0dcAOelSAIHAAAAAAAAAA==/",
      "_etag": "\"0100e92a-0000-0000-0000-5ba96cf70000\"",
      "_attachments": "attachments/",
      "_ts": 1537830135
    }
  ],
  "_count": 0
}
Problems:
1) Cannot find the .pdb belonging to Microsoft.Azure.DocumentDB.Core v2.1.0.
2) What payload/headers should be returned so that the library will NOT blow up and retry when we invoke:
var response = await documentQuery.ExecuteNextAsync<DocumentDto>(); // this hangs forever
Please help :)
We're working on open-sourcing the C# code base and some other fun improvements to make this easier. In the meantime, I'd advocate using the emulator for local testing etc., although I understand mocking is still a lot faster and nicer - it'll just be hard :)
My best pointer is actually our Node.js code base, since that's public already. The query code is relatively hard to follow, but basically: you create a query, we look up all the partitions we need to talk to, then we send a request for each partition and keep querying until we don't get back a continuation token anymore (or MaxBufferedItemCount etc. goes over the limit, and we pause until it goes back down, and so on).
Effectively, we send out N requests per partition, where N is the number of result pages and can vary per partition and query. You'd likely be able to mock a single-partition, single-page response relatively easily, but a full multi-partition response isn't gonna be fun.
As I mentioned in the beginning, we've got some cool stuff coming, hopefully before the end of the year, which will make offline mocking easier, as well as open sourcing it finally. You might be better off with the emulator until then.
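To make the loop described above concrete, here is a small Python sketch of the per-partition pagination the SDK performs; fetch_page is a hypothetical placeholder for one POST to the documents endpoint. The practical takeaway for mocking: the client keeps re-issuing the query as long as the response carries a continuation token (the x-ms-continuation response header in the REST API), so a single-page mock should omit that header to let ExecuteNextAsync complete:

# Hypothetical sketch of the query loop described in the answer above.
def fetch_page(partition_key_range, continuation=None):
    """Placeholder for one POST to /dbs/{db}/colls/{coll}/docs.

    Would return (documents, continuation_token), where the token comes
    from the x-ms-continuation response header (None when absent).
    """
    raise NotImplementedError  # illustration only

def run_query(partition_key_ranges):
    results = []
    for pkrange in partition_key_ranges:
        continuation = None
        while True:
            docs, continuation = fetch_page(pkrange, continuation)
            results.extend(docs)
            if not continuation:  # no continuation token -> this partition is drained
                break
    return results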
