I'm trying to get the repo names under a project using the Bitbucket API. The current documentation says to use
curl -u username:pwd http://${bitbucket-url}/rest/api/1.0/projects/${projectkey}/repos/
Response:
{
  "size": 1,
  "limit": 25,
  "isLastPage": true,
  "values": [
    {
      "slug": "my-repo",
      "id": 1,
      "name": "My repo",
      "scmId": "git",
      "state": "AVAILABLE",
      "statusMessage": "Available",
      "forkable": true,
      "project": {
        "key": "PRJ",
        "id": 1,
        "name": "My Cool Project",
        "description": "The description for my cool project.",
        "public": true,
        "type": "NORMAL",
        "links": {
          "self": [
            {
              "href": "http://link/to/project"
            }
          ]
        }
      },
      "public": true,
      "links": {
        "clone": [
          {
            "href": "ssh://git#/PRJ/my-repo.git",
            "name": "ssh"
          },
          {
            "href": "https:///scm/PRJ/my-repo.git",
            "name": "http"
          }
        ],
        "self": [
          {
            "href": "http://link/to/repository"
          }
        ]
      }
    }
  ],
  "start": 0
}
But I only need the repo names from the response.
import json
import os
import subprocess

base_dir = os.getcwd()
DETACHED_PROCESS = 0x00000008  # Windows-specific creation flag

# Fill these in for your instance
bb_url = "http://<bitbucket-url>/rest/api/1.0/projects/<projectkey>/repos"
bb_user = "username"
bb_pwd = "password"

page = 0
while True:
    cmd = ('curl --url "' + bb_url + '?pagelen=100&page=' + str(page) +
           '" --user ' + bb_user + ':' + bb_pwd +
           ' --request GET --header "Accept: application/json"')
    output = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              creationflags=DETACHED_PROCESS).communicate()
    datastore = json.loads(output[0].decode("utf-8"))
    size = datastore.get("size")
    values = datastore.get("values")
    if len(values) == 0:
        break
    for repo in range(size):
        repo_name = values[repo]["slug"]
        with open(os.path.join(base_dir, "repositoryList.txt"), "a+") as f_initial:
            f_initial.write(repo_name)
            f_initial.write("\n")
    page = page + 1
This script will get the list of all the repositories in your project and write it to the file repositoryList.txt.
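If you would rather not shell out to curl, here is a rough equivalent sketch using the requests library. It pages with the limit/start/isLastPage fields visible in the response above; the URL and credentials are placeholders you need to fill in.

import requests

base_url = "http://<bitbucket-url>/rest/api/1.0/projects/<projectkey>/repos"
auth = ("username", "password")

start = 0
slugs = []
while True:
    resp = requests.get(base_url, params={"limit": 100, "start": start}, auth=auth)
    resp.raise_for_status()
    data = resp.json()
    slugs.extend(repo["slug"] for repo in data["values"])
    if data.get("isLastPage", True):
        break
    # "nextPageStart" (when present) is the offset of the next page; fall back to start + size
    start = data.get("nextPageStart", start + data["size"])

with open("repositoryList.txt", "w") as f:
    f.write("\n".join(slugs) + "\n")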
With a bash command:
repoNamesJson=$(curl -D- -X GET -H "Authorization: Basic <encoded user pasword here>" -H "Content-Type: application/json" https://yourstash/rest/api/1.0/projects/ad/repos?limit=100000)
repoNames=$(echo $repoNamesJson | awk -v RS=',' '/{"slug":/ {print}' | sed -e 's/{"slug":/''/g' | sed -e 's/"/''/g')
echo $repoNames
With the stashy Python library:
import stashy
bitbucket = stashy.connect("host", "username", "password")
projects = bitbucket.projects.list()
repos = bitbucket.repos.list()
for project in projects:
    for repo in bitbucket.projects["%s" % (project["key"])].repos.list():
        print(repo["name"])
        print(repo["project"]["key"])
You can use Bitbucket API partial responses in order to limit the fields returned by the API.
Taking excerpts from the doc page:
[...] use the fields query parameter.
The fields parameter supports 3 modes of operation:
Removal of select fields (e.g. -links)
Pulling in additional fields not normally returned by an endpoint, while still getting all the default fields (e.g. +reviewers)
Omitting all fields, except those specified (e.g. owner.display_name)
The fields parameter can contain a list of multiple comma-separated field names (e.g. fields=owner.display_name,uuid,links.self.href). The parameter itself is not repeated.
So in your case it would be something like:
curl -u username:pwd
http://${bitbucket-url}/rest/api/1.0/projects/${projectkey}/repos?fields=values.slug
Though I must say that the JSON output is not flat; it will still retain its original structure:
{
  "values": [
    {
      "slug": "your repo slug #1"
    },
    ...
So, if you actually want only a list with each repo slug on its own line, there's still some leg work to do.
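For that last step, here is a minimal sketch (assuming the response shape shown above) that flattens the values array into one slug per line; the script name is just an example, and you would pipe the curl output into it:

import json
import sys

# e.g. curl -u username:pwd ".../repos?fields=values.slug" | python print_slugs.py
data = json.load(sys.stdin)
for repo in data.get("values", []):
    print(repo["slug"])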
I have a JSON structure as given below:
{"DocumentName":"es","DocumentId":"2","Content": [{"PageNo":1,"Text": "The full text queries enable you to search analyzed text fields such as the body of an email. The query string is processed using the same analyzer that was applied to the field during indexing."},{"PageNo":2,"Text": "The query string is processed using the same analyzer that was applied to the field during indexing."}]}
I need to get stemmed analyzed results for the Content.Text field. For that I've created a mapping while creating the index. It is given below:
curl -X PUT "localhost:9200/myindex?pretty" -H "Content-Type: application/json" -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "my_stemmer"]
        }
      },
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "name": "english"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "DocumentName": {
        "type": "text"
      },
      "DocumentId": {
        "type": "keyword"
      },
      "Content": {
        "properties": {
          "PageNo": {
            "type": "integer"
          },
          "Text": {
            "type": "text",
            "analyzer": "my_analyzer",
            "search_analyzer": "my_analyzer"
          }
        }
      }
    }
  }
}'
I checked the analyzer that was created:
curl -X GET "localhost:9200/myindex/_analyze?pretty" -H "Content-Type: application/json" -d"{\"analyzer\":\"my_analyzer\",\"text\":\"indexing\"}"
and it gave the result:
{
  "tokens" : [
    {
      "token" : "index",
      "start_offset" : 0,
      "end_offset" : 8,
      "type" : "<ALPHANUM>",
      "position" : 0
    }
  ]
}
But after uploading the JSON into the index, when I tried searching for "index" it returned 0 results.
import requests
from elasticsearch import Elasticsearch

res = requests.get('http://localhost:9200')
es = Elasticsearch([{'host': 'localhost', 'port': '9200'}])
res = es.search(index='myindex', body={"query": {"match": {"Content.Text": "index"}}})
Any help would be much appreciated. Thank you in advance.
The stemmer is working. Try the following:
Mapping:
curl -X DELETE "localhost:9200/myindex"
curl -X PUT "localhost:9200/myindex?pretty" -H "Content-Type: application/json" -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "english_exact": {
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "DocumentName": {
        "type": "text"
      },
      "DocumentId": {
        "type": "keyword"
      },
      "Content": {
        "properties": {
          "PageNo": {
            "type": "integer"
          },
          "Text": {
            "type": "text",
            "analyzer": "english",
            "fields": {
              "exact": {
                "type": "text",
                "analyzer": "english_exact"
              }
            }
          }
        }
      }
    }
  }
}'
Data:
curl -XPOST "localhost:9200/myindex/_doc/1" -H "Content-Type: application/json" -d'
{
  "DocumentName": "es",
  "DocumentId": "2",
  "Content": [
    {
      "PageNo": 1,
      "Text": "The full text queries enable you to search analyzed text fields such as the body of an email. The query string is processed using the same analyzer that was applied to the field during indexing."
    },
    {
      "PageNo": 2,
      "Text": "The query string is processed using the same analyzer that was applied to the field during indexing."
    }
  ]
}'
Query:
curl -XGET 'localhost:9200/myindex/_search?pretty' -H "Content-Type: application/json" -d '
{
  "query": {
    "simple_query_string": {
      "fields": [
        "Content.Text"
      ],
      "query": "index"
    }
  }
}'
Exactly one document is returned, as expected. I've also tested the following stems; they all worked correctly with the proposed mapping: apply (applied), texts (text), use (using).
Python example:
import requests
from elasticsearch import Elasticsearch
res = requests.get('http://localhost:9200')
es = Elasticsearch([{'host': 'localhost', 'port': '9200'}])
res = es.search(index='myindex', body={"query": {"match": {"Content.Text": "index"}}})
print(res)
Tested on Elasticsearch 7.4.
I am trying to request data using the httr package, following this format:
args <- list(metrics = c(list(name = "Jobs.2018", as = "Jobs 2018")),
             constraints = list(dimensionName = "Area",
                                map = list("Latah County ID" = c(16057))))

test <- POST(url = "https://agnitio.emsicloud.com/emsi.us.demographics/2018.3",
             add_headers(`authorization` = paste("bearer", token)),
             add_headers(`content-type` = "application/json"),
             body = toJSON(args, auto_unbox = TRUE),
             verbose())
I keep getting a 400 Bad Request error despite everything I have looked up and tried. Do I need to add something to the arguments that I am just not finding?
P.S. I am sorry that this isn't a reproducible example.
We'll assume (a bad thing but necessary for an answer) that you obtained token by issuing a prior POST request as indicated on the linked API page and then properly decoded the JSON web token into token.
If you did that properly, then the next likely possibility is malformed body data in the POST request.
When I look at a sample API call:
curl --request POST \
--url https://agnitio.emsicloud.com/emsi.us.industry/2018.3 \
--header 'authorization: bearer <access_token>' \
--header 'content-type: application/json' \
--data '{ "metrics": [ { "name": "Jobs.2017", "as":"2017 Jobs" }, { "name": "Establishments.2017" } ], "constraints": [ { "dimensionName": "Area", "map": { "Latah County, ID": ["16057"] } }, { "dimensionName": "Industry", "map": { "Full Service Restaurants": ["722511"] } } ] }'
that sample JSON looks like this pretty-printed:
{
  "metrics": [
    {
      "name": "Jobs.2017",
      "as": "2017 Jobs"
    },
    {
      "name": "Establishments.2017"
    }
  ],
  "constraints": [
    {
      "dimensionName": "Area",
      "map": {
        "Latah County, ID": [
          "16057"
        ]
      }
    },
    {
      "dimensionName": "Industry",
      "map": {
        "Full Service Restaurants": [
          "722511"
        ]
      }
    }
  ]
}
Yours looks like:
{
  "metrics": {
    "name": "Jobs.2018",
    "as": "Jobs 2018"
  },
  "constraints": {
    "dimensionName": "Area",
    "map": {
      "Latah County ID": 16057
    }
  }
}
when it needs to look more like this:
{
  "metrics": [
    {
      "name": "Jobs.2018",
      "as": "Jobs 2018"
    }
  ],
  "constraints": [
    {
      "dimensionName": "Area",
      "map": {
        "Latah County ID": [
          "16057"
        ]
      }
    }
  ]
}
To do that, we need to use this list structure:
list(
  metrics = list(
    list(
      name = jsonlite::unbox("Jobs.2018"),
      as = jsonlite::unbox("Jobs 2018")
    )
  ),
  constraints = list(
    list(
      dimensionName = jsonlite::unbox("Area"),
      map = list("Latah County ID" = c("16057"))
    )
  )
) -> args
Note especially that the API expects the map ID JSON data element to be character and not integer/numeric.
Now, we can make the POST request like this (spaced out for answer readability as it has embedded comments):
httr::POST(
  url = "https://agnitio.emsicloud.com/emsi.us.demographics/2018.3",
  httr::add_headers(
    `authorization` = sprintf("bearer %s", token)
  ),
  encode = "json",           # this lets httr do the work for you
  httr::content_type_json(), # easier than making a header yourself
  body = args,
  httr::verbose()
) -> res
That should work, but because it's a closed API without free registration I cannot test it.
I have successfully tested the first few intents of my app with my webhook in the DialogFlow console, but testing in the Simulator gives the following error:
UnparseableJsonResponse API Version 2: Failed to parse JSON response
string with 'INVALID_ARGUMENT' error: ": Cannot find field.".
NB!!!
The first thing to notice is that it refers to "API Version 2".
No requests are reaching my webhook - so it appears that this is all within Google.
Using Chrome Developer Tools, I see the Network entry that gets this error response - some details below:
Request url from Simulator: https://assistant.clients6.google.com/v1/assistant:converse?alt=json&key=A.....
NB!!!
(Note it says 'v1')
Request Payload:
{"conversationToken":"","debugLevel":1,"inputType":"KEYBOARD","locale":"en-US","mockLocation":{"city":"Mountain View","coordinates":{"latitude":37.421980615353675,"longitude":-122.08419799804688},"formattedAddress":"Googleplex, Mountain View, CA 94043, United States","zipCode":"94043"},"query":"Talk to ","surface":"PHONE"}
Response:
{
"response": "Connect the docs isn't responding right now. Try again soon.",
"conversationToken": "GidzaW11bGF0b3JfZGV2aWNlXzM3MTQxRERFM0I0Nzk1Q0ZfMDAwMDA=",
"audioResponse": "//NExAAR... encoded audio ...",
"debugInfo": {
"assistantToAgentDebug": {
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=...token...' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: ...authorization key...' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"...user id...\",\"locale\":\"en-US\",\"lastSeen\":\"2017-12-15T17:22:55Z\"},\"conversation\":{\"conversationId\":\"1513778713541\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Connect The Docs\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'",
"assistantToAgentJson": "{\"user\":{\"userId\":\"...user id...\",\"locale\":\"en-US\",\"lastSeen\":\"2017-12-15T17:22:55Z\"},\"conversation\":{\"conversationId\":\"1513778713541\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Connect The Docs\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}"
},
"agentToAssistantDebug": {
"agentToAssistantJson": "{\"message\":\"Unexpected apiai response format: Empty speech response\",\"apiResponse\":{\"id\":\"24ddbf1c-3930-40c6-ba50-03c0935cd1d0\",\"timestamp\":\"2017-12-20T14:05:13.766Z\",\"lang\":\"en-us\",\"result\":{},\"status\":{\"code\":200,\"errorType\":\"success\"},\"sessionId\":\"1513778713541\"}}"
},
"sharedDebugInfo": [{
"name": "ResponseValidation",
"subDebugEntry": [{
"name": "UnparseableJsonResponse",
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\"."
}]
}]
},
"visualResponse": {}
}
I have been informed by Google Support that I am indeed using version V2 - I initiated this in December 2017 - long after the May 2017 "cutoff date" where V2 is supposed to be the default.
Is this a Google bug? Have I missed something setting up my intents? Or is there another setting that may be causing this?
I see that other posts in the DialogFlow forum show the same problem.
Any help is appreciated.
Added on 1/9/2018:
Contents of the Debug Tab:
{
"agentToAssistantDebug": {
"agentToAssistantJson": {
"message": "Unexpected apiai response format: Empty speech response",
"apiResponse": {
"id": "64a900d2-23e8-4833-b9de-0b207f63bffc",
"timestamp": "2018-01-08T21:08:36.821Z",
"lang": "en-us",
"result": {},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "1515445716570"
}
}
},
"assistantToAgentDebug": {
"assistantToAgentJson": {
"user": {
"userId": "ABwppHFGoTJm5fKpau6WWwufKQE5UwkebooowZF7YhvD7PPY-hUfxU2_KRpB0LLNcLPyXasbXnRxXT6fniKk",
"locale": "en-US",
"lastSeen": "2018-01-05T15:53:11Z"
},
"conversation": {
"conversationId": "1515445716570",
"type": "NEW"
},
"inputs": [
{
"intent": "actions.intent.MAIN",
"rawInputs": [
{
"inputType": "VOICE",
"query": "talk to connect the docs"
}
]
}
],
"surface": {
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
}
]
},
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=0ffc8bcf72704850a4b4139d49a8d72e' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6IjBhYTQ1NDFlNGM4ZWVhODQ0NjhmZTYxYTkzZmIxYzA2MzJkYjVhMGYifQ.eyJhdWQiOiJhY3RpdmUtZG9jdW1lbnQiLCJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJqdGkiOiIwY2U2OTdlNmE3NGFiZmVmZTdiYzhmMGU2ZGJlMzEyMDFjOWU3MzA5IiwiaWF0IjoxNTE1NDQ1NzE2LCJleHAiOjE1MTU0NDU4MzYsIm5iZiI6MTUxNTQ0NTQxNn0.hZNpVrH4o8ObGIvZ7BQV44nymekTWR_K4_jsDKCzgj74z57IDyUXNGEZs6KUFxBM_2FXiSoOxJUQZ1OhDRpkpQ6L4LELYN_JDhly7kgy-SLgKgLG6FZ4YV-8qOgr9Uxmr9SsG6NSXdiG7HvTrHLXIwA8K2siBNGGDWAIB691gAC8qsjsq4d3VnHMTeqlJ6mDoOtZ2xdLnJbK5B-OK-rLHEhX6K1-Z7rXQL3OgSwUtRVvYfHI3jqY83Xn3-uf06izkQhwVqH-W6X1REltrlxFTPW2h72D-st-QQ9euIpK3fn0x-z3ouQ17g-rGrPjKcOop9FejtKMT1tibxSkQ7qywQ' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"ABwppHFGoTJm5fKpau6WWwufKQE5UwkebooowZF7YhvD7PPY-hUfxU2_KRpB0LLNcLPyXasbXnRxXT6fniKk\",\"locale\":\"en-US\",\"lastSeen\":\"2018-01-05T15:53:11Z\"},\"conversation\":{\"conversationId\":\"1515445716570\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"talk to connect the docs\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
"name": "UnparseableJsonResponse"
}
]
}
]
}
Contents of the Validation Errors Tab:
UnparseableJsonResponse
API Version 2: Failed to parse JSON response string with
'INVALID_ARGUMENT' error: ": Cannot find field.".
Screenshot of welcome intent added 1/10/2018.
The problem is a combination of two things:
There are no text replies set in the Response section.
When the Intent is triggered, it does not get sent to a webhook.
As a result, Dialogflow replies to the Assistant with no text response, which is an error.
You can correct this by making sure your welcome intent does one of the following (you don't have to do both):
Set one or more text replies. These would be sent back when the Intent is called.
Check the Use webhook box under Fulfillment. This would call your webhook when the Intent is triggered. (And then make sure that your webhook returns a valid response.)
As you speculated in your comments, you could also change the Welcome Intent to one of your other Intents that you've already tested to respond. There is nothing special about this particular Welcome Intent - it was just the one created by default for you.
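If you go the webhook route, the shape of the reply matters. Below is a minimal sketch (not your actual webhook) of a Flask handler that always returns a text reply, so Dialogflow never hands the Assistant an empty response. It assumes the Dialogflow v2 fulfillment format (the "fulfillmentText" field; legacy v1 webhooks used "speech"/"displayText" instead), and the route name and reply text are placeholders.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(silent=True) or {}
    intent = req.get("queryResult", {}).get("intent", {}).get("displayName", "welcome")
    # Returning a non-empty fulfillmentText avoids the "Empty speech response" error.
    return jsonify({"fulfillmentText": "Welcome to the app! (intent: %s)" % intent})

if __name__ == "__main__":
    app.run(port=5000)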
Is there a way to download an artifact using AQL?
I have a query sent with:
curl -u user:pass \
-X POST https://artifactoryURL/artifactory/api/search/aql \
-H 'content-type: text/plain' \
-T query.aql
And my query.aql:
items.find(
  {
    "repo": {"$eq": "repo"},
    "$and": [
      {
        "path": {"$match": "path/*"},
        "name": {"$match": "*.rpm"}
      }
    ]
  }
)
.sort({"$desc": ["modified"]})
.limit(1)
Now that I know it returns what I want, is there a way to change the request from api/search/aql to something like api/download/aql and get those items?
EDIT:
I had also tried doing this with the JFrog CLI, but it doesn't fully support AQL there (sort and limit didn't work).
This is the command I tried:
jfrog rt s --spec=query-aql.json
And the spec that failed to sort and limit results:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "repo",
          "$and": [
            {
              "path": {"$match": "path/*"},
              "name": {"$match": "*.rpm"}
            }
          ]
        }
      },
      "sort": {
        "$asc": ["modified"]
      },
      "limit": 1
    }
  ]
}
EDIT 2:
Added a jfrog-cli-go issue: https://github.com/JFrogDev/jfrog-cli-go/issues/56
An easy way to use your AQL script to download files from Artifactory is to use the JFrog CLI, as mentioned here: https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-Download,CopyandMoveCommandsSpecSchema
The CLI can be downloaded as an executable for Linux, Mac, or Windows and should fit your needs.
With the curl command, the only thing you can do is parse the result of your AQL query and perform a download request for each file.
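For completeness, a rough sketch of that curl-then-download approach in Python with the requests library. The base URL and credentials are placeholders, and it assumes the AQL endpoint returns a "results" list whose items carry repo, path, and name fields.

import requests

base = "https://artifactoryURL/artifactory"
auth = ("user", "pass")

with open("query.aql") as f:
    aql = f.read()

# Run the AQL search, then fetch each matching artifact by its repo/path/name
resp = requests.post(base + "/api/search/aql", data=aql, auth=auth,
                     headers={"Content-Type": "text/plain"})
resp.raise_for_status()

for item in resp.json().get("results", []):
    file_url = "%s/%s/%s/%s" % (base, item["repo"], item["path"], item["name"])
    r = requests.get(file_url, auth=auth)
    r.raise_for_status()
    with open(item["name"], "wb") as out:
        out.write(r.content)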
I was just looking for a very similar thing -- use a download spec file to download the latest artifact from a given repo and path. I don't care if I use AQL or not, I just want it in the download-spec.json file. If you go to the link above, look at Example 5.
Modified for your example:
{
  "files": [
    {
      "pattern": "path/*.rpm",
      "target": "my/download/path/",
      "sortBy": "created",
      "sortOrder": "desc",
      "limit": 1
    }
  ]
}
The JFrog CLI supports --limit, --sort-order, and --sort-by arguments.
The following works for me:
jfrog rt search --spec=/tmp/jfrogfilespec.json --sort-by created --sort-order=desc --limit 1
The contents of the json spec file are:
{ "files": [ { "aql": { "items.find": { "repo":{"$eq":"my-release-repo"}, "name":{"$match":"my-artifact-prefix*"} } } } ] }
This generates the following query (according to debug mode):
items.find( { "repo":{"$eq":"my-release-repo"}, "name":{"$match":"my-artifact-prefix*"} } ).include("name","repo","path","actual_md5","actual_sha1","size","type","created").sort({"$desc":["created"]}).limit(1)
What is frustrating is that I cannot seem to find a way to use "jfrog rt search" with a filespec that allows me to influence the "include" modifier portion of the search.
I used the method below to get an auth token and got the output shown below. But if I use that token id as the X-Auth-Token for a PUT method, it doesn't work; it fails as explained below.
curl -sd '{"auth":{"passwordCredentials":{"username": "admin", "password": "password"}}}' -H "Content-type: application/json" http://169.0.0.11:5000/v2.0/tokens | python -m json.tool
{
  "access": {
    "metadata": {
      "is_admin": 0,
      "roles": []
    },
    "serviceCatalog": [],
    "token": {
      "audit_ids": [
        "GgpxHyihQVyuI1ryerQZVw"
      ],
      "expires": "2016-08-15T16:11:49Z",
      "id": "bcced26a96304e8197fa85e110df9aa2",
      "issued_at": "2016-08-15T15:11:49.386446"
    },
    "user": {
      "id": "a5064af3b125449a9a09e9b69966f843",
      "name": "admin",
      "roles": [],
      "roles_links": [],
      "username": "admin"
    }
  }
}
curl -i -X PUT "X-Auth-Token:bcced26a96304e8197fa85e110df9aa2" http://169.0.0.11/dashboard/project/containers/test/mymusic/
but it says "Could not resolve host: X-AUTH-TOKEN" and then "HTTP/1.1 301 MOVED PERMANENTLY".
Can anyone help me solve this problem, please?
I believe you missed adding -H for the header; the rest looks fine.
curl -i -X PUT -H "X-Auth-Token:bcced26a96304e8197fa85e110df9aa2" http://169.0.0.11/dashboard/project/containers/test/mymusic/
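For reference, here is a minimal sketch of the same PUT from Python with the requests library, passing the token as a header (the URL and token are taken from the question above):

import requests

token = "bcced26a96304e8197fa85e110df9aa2"
url = "http://169.0.0.11/dashboard/project/containers/test/mymusic/"

# Send the token in the X-Auth-Token header, equivalent to curl's -H option
resp = requests.put(url, headers={"X-Auth-Token": token})
print(resp.status_code, resp.reason)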