I use CoffeeScript (CS) heavily in my Meteor sources. In fact, everything in my project is written in CS, and I want to write packages the same way. How should they be organized, declared, and written so they leverage the power of the CS dialect while maximizing testability and portability?
In short, you need only api.use('coffeescript'); in your Package.onUse and Package.onTest in order to write your packages in CoffeeScript. See the docs for an outline of the namespacing quirks.
Here's a simple example of a package called safe which contains the following four files:
package.js
Package.describe({
name: 'safe',
summary: 'Encrypt strings to keep them safe (or not)'
});
Package.onUse(function(api) {
api.versionsFrom('1.1.0.3');
api.export('Safe');
api.use('coffeescript');
api.addFiles('encrypt.coffee');
api.addFiles('safe.coffee');
});
Package.onTest(function(api) {
api.use('tinytest');
api.use('safe');
api.use('coffeescript');
api.addFiles('tests.coffee');
});
encrypt.coffee
# use the share object to export code to other files in the package
share.encrypt = (string) ->
  # a super strong encryption :)
  string.replace /[a-zA-Z]/g, (c) ->
    String.fromCharCode (if ((if c <= "Z" then 90 else 122)) >= (c = c.charCodeAt(0) + 13) then c else c - 26)
safe.coffee
{encrypt} = share
class Safe
  constructor: (@string) ->
  encrypt: ->
    encrypt @string
tests.coffee
Tinytest.add 'safe encryption', (test) ->
  safe = new Safe 'pandapants'
  test.equal safe.encrypt(), 'cnaqncnagf'
This should give you a template to start from. If you need additional clarification, just ask in the comments and I'll update the answer as needed.
I have been using Postman a lot and have built many useful things with it, but now I need to implement one more thing.
Briefly: I need to create test cases that check that records are counted correctly for every institution.
Here is how I handle it now.
Structure:
- Collection
  - Folder
    - Districts (folder)
      - some folders with tests
    - Colleges (folder)
      - some folders with tests
    - Schools (folder)
      - Principal (folder with tests, inside the Schools folder)
where each of those folders contains these requests:
POST: Create a list
with this in its "Tests" tab:
var jList = JSON.parse(responseBody);
postman.setEnvironmentVariable("list_id", jList.data.id);
POST: Add some filters to the list
GET: lists/{{list_id}}, where the "Tests" code is:
```
var allLists = JSON.parse(responseBody);
pm.test("test count", function () {
const value = allLists.data.count;
pm.expect(typeof value === 'number').to.eql(true);
pm.expect(value > 0 && value < 999999).to.eql(true);
}); // I check that the number (from step 2) is in the range we need
```
DELETE: delete this list
I guess I'm doing "overhead work" by adding the POST (create a list) and DELETE requests in every folder. Can I somehow move them out to a shared place (an environment or a variable?) and run the create request before every "POST: Add some filters to the list" and the DELETE request after it?
Something like a general beforeEach and afterEach.
Perhaps I can even move this (below) to one place shared by every separate test?
pm.test("test count", function () {
const value = allLists.data.count;
pm.expect(typeof value === 'number').to.eql(true);
pm.expect(value > 0 && value < 999999).to.eql(true);
});
Here is an example of how it looks. If it's not too difficult, please give me some advice! Thanks.
I'm building complications for a nutrition tracking app. I'd like to offer multiple smaller complications, so the user can track their nutrition.
EG:
'MyApp - Carbohydrates'
'MyApp - Protein'
'MyApp - Fat'
This way, on the Modular watch face, they could track all three by using the three bottom "Modular Small" complications.
I'm aware this can be achieved by only offering larger sizes that can display everything at once (eg the 'modular large' complication), but I'd like to offer the user choice about how they set up their watch face.
I can't see a way to offer multiple of the same complication, is there any way around this?
The previous answer is outdated. From watchOS 7 onwards, we can add multiple complications in the same complication family for our app.
Step 1:
In our ComplicationController.swift file, we can make use of the getComplicationDescriptors function, which allows us to describe what complications we are making available in our app.
In the descriptors array, we can append one CLKComplicationDescriptor() for each kind of complication per family that we want to build.
func getComplicationDescriptors(
handler: @escaping ([CLKComplicationDescriptor]) -> Void) {
var descriptors: [CLKComplicationDescriptor] = []
for progressType in dataController.getProgressTypes() {
var dataDict = Dictionary<AnyHashable, Any>()
dataDict = ["id": progressType.id]
// userInfo helps us know which type of complication was interacted with by the user
let userActivity = NSUserActivity(
activityType: "org.example.foo")
userActivity.userInfo = dataDict
descriptors.append(
CLKComplicationDescriptor(
identifier: "\(progressType.id)",
displayName: "\(progressType.title)",
supportedFamilies: CLKComplicationFamily.allCases, // you can replace CLKComplicationFamily.allCases with an array of complication families you wish to support
userActivity: userActivity)
)
}
handler(descriptors)
}
The app will now have multiple complications (equal to the length of the dataController.getProgressTypes() array) for each complication family that you support.
But how do you now display different data and views for different complications?
Step 2:
In the getCurrentTimelineEntry and getTimelineEntries functions, we can then use the complication.identifier value to work out which complication an entry is being requested for.
Example, in the getTimelineEntries function:
func getTimelineEntries(for complication: CLKComplication, after date: Date, limit: Int, withHandler handler: @escaping ([CLKComplicationTimelineEntry]?) -> Void) {
// Call the handler with the timeline entries after the given date
var entries: [CLKComplicationTimelineEntry] = []
...
...
...
var next: ProgressDetails
// Find the progressType to show using the complication identifier
if let progressType = dataController.getProgressAt(date: current).first(where: {$0.id == complication.identifier}) {
next = progressType
} else {
next = dataController.getProgressAt(date: current)[0] // Default to the first progressType
}
let template = makeTemplate(for: next, complication: complication)
let entry = CLKComplicationTimelineEntry(
date: current,
complicationTemplate: template)
entries.append(entry)
...
...
...
handler(entries)
}
You can similarly find the data that is passed in the getCurrentTimelineEntry and the getLocalizableSampleTemplate functions.
Step 3:
Enjoy!
There is currently no way to create multiple complications in the same family (e.g. Modular Small, Modular Large, Utilitarian Small etc.).
You could offer a way for the user to customize each complication to display Carbohydrates, Protein and Fat. You could even display different data for each complication family, but as far as having, for example, 2 modular small complications displaying different data, it is not possible yet.
You can see this if you put two of the same built-in Apple complications in different places on your watch face: they display the same thing. If Apple isn't doing it with their own complications, then it is more than likely not possible. Hope that explanation helps.
I use a few different triplestores, and code in R and Scala. I think I'm seeing some differences in:
- whether the triplestores include triples other than the ones I explicitly loaded
- the point at which these "background" triples might be added
Are there any general rules for whether supporting vocabularies need to be added, independent of the implementation technology?
Using Jena in R, via rrdf, I usually only see what I loaded:
library(rrdf)
turtle.input.string <-
"PREFIX prefix: <http://example.com/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix:subject rdf:type prefix:object"
jena.model <-
fromString.rdf(rdfContent = turtle.input.string, format = "TURTLE")
model.string <- asString.rdf(jena.model, format = "TURTLE")
cat(model.string)
This gives:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix prefix: <http://example.com/> .
prefix:subject a prefix:object .
But sometimes triples from RDF and RDFS seem to appear when I add or remove triples afterwards. That's what "bothers" me the most, but I'm having trouble finding an example right now. If nobody knows what I mean, I'll dig something up later today.
When I use Blazegraph in Scala, via the OpenRDF Sesame library, I think I always get RDF, RDFS, and OWL "for free":
import java.util.Properties
import org.openrdf.query.QueryLanguage
import org.openrdf.rio._
import com.bigdata.journal._
import com.bigdata.rdf.sail._
object InjectionTest {
val jnl_fn = "sparql_tests.jnl"
def main(args: Array[String]): Unit = {
val props = new Properties()
props.put(Options.BUFFER_MODE, BufferMode.DiskRW)
props.put(Options.FILE, jnl_fn)
val sail = new BigdataSail(props)
val repo = new BigdataSailRepository(sail)
repo.initialize()
val cxn = repo.getConnection()
val resultStream = new java.io.ByteArrayOutputStream
val resultWriter = Rio.createWriter(RDFFormat.TURTLE, resultStream)
val ConstructString = "construct {?s ?p ?o} where {?s ?p ?o}"
cxn.prepareGraphQuery(QueryLanguage.SPARQL, ConstructString).evaluate(resultWriter)
var resString = resultStream.toString()
println(resString)
}
}
Even without adding any triples, the construct output includes blocks like this:
rdfs:isDefinedBy rdfs:domain rdfs:Resource ;
rdfs:range rdfs:Resource ;
rdfs:subPropertyOf rdfs:isDefinedBy , rdfs:seeAlso .
Are there any general rules for whether supporting vocabularies need to be added, independent of the implementation technology?
That depends on what inferencing scheme your triplestore claims to support. For a pure RDF store (no inferencing), no additional triples should be added at all.
Judging from the fragment you showed, the Blazegraph store you used has at least RDFS inferencing (and possibly partial OWL reasoning as well?) enabled. Note that this is store-specific, not framework-specific: it's not a Jena vs. Sesame thing, as both frameworks support stores that either do or do not do reasoning. Of course, if you use either framework with the "exclude inferred triples" option that they offer, the backing store should respect that option and not include inferred triples in the result.
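For example, with the Sesame API used in the Blazegraph snippet above, the query object exposes setIncludeInferred. A minimal sketch, reusing the cxn, ConstructString and resultWriter names from the question:
// build the CONSTRUCT query as before, but exclude inferred triples
val query = cxn.prepareGraphQuery(QueryLanguage.SPARQL, ConstructString)
query.setIncludeInferred(false) // only return explicitly loaded triples
query.evaluate(resultWriter)
With setIncludeInferred left at its default (true), the store is free to include whatever it has inferred.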
I'm using Microsoft's Linguistics API, and I'm trying to extract specific tokens from the returned tree. I don't see any kind of parser for traversing the tree in any documentation...
One approach I considered was to use the Stanford NLP parser, but it seems a little overkill for what I need.
Is there an existing parser that I could use?
Here is sample data that is returned. For example, what can I use to extract the "NNP" token (Tom)?
[{
"analyzerId": "4FA79AF1-F22C-408D-98BB-B7D7AEEF7F04",
"result": [ ["NNP",",","NNP","."], ["WRB","VBP","PRP","NN","."] ] },
{
"analyzerId": "22A6B758-420F-4745-8A3C-46835A67C0D2",
"result":["(TOP (S (NNP Hi) (, ,) (NNP Tom) (. !)))","(TOP (SBARQ (WHADVP (WRB How)) (SQ (VP (VBP are)) (NP (PRP you)) (NN today) (. ?))))"] }]
Find my parser-to-tree (and roundtripping) source code at:
https://github.com/BSalita/Woundify/blob/master/WoundifyShared/ParseHelpers.cs
ParseHelpers is a file from the Woundify project. One of the goals of the project is to demonstrate calling and consuming APIs from all leading AI service providers (Microsoft, Google, HPE, IBM, Wit, Hound, etc.).
I've edited in a usage fragment from the command.cs file:
foreach (Newtonsoft.Json.Linq.JToken s in arrayOfResults)
{
ConstituencyTreeNode root = ParseHelpers.ConstituencyTreeFromText(s.ToString());
text = ParseHelpers.TextFromConstituencyTree(root); // roundtrip
if (text != s.ToString()) // original and roundtrip must compare equal
throw new FormatException();
string words = ParseHelpers.WordsFromConstituencyTree(root);
string[] printLines = ParseHelpers.FormatConstituencyTree(root);
foreach (string p in printLines)
Console.WriteLine(p);
}
I'm currently implementing a SBT plugin for Gatling.
One of its features will be to open the last generated report in a new browser tab from SBT.
As each run can have a different "simulation ID" (basically a simple string), I'd like to offer tab completion on simulation ids.
An example:
Running the Gatling SBT plugin will produce several folders (named from the simulation ID plus the date of report generation) in target/gatling, for example mysim-20140204234534, myothersim-20140203124534 and yetanothersim-20140204234534.
Let's call the task lastReport.
If someone starts typing lastReport my, I'd like to filter the tab completion so that it only suggests mysim and myothersim.
Getting the simulation IDs is a breeze, but how can I help the parser and filter the suggestions so that it only suggests existing simulation IDs?
To sum up, I'd like to do what testOnly does, in a way: I only want to suggest things that make sense in my context.
Thanks in advance for your answers,
Pierre
Edit: As I got a bit stuck after my latest tries, here is the code of my input task, in its current state:
package io.gatling.sbt
import sbt._
import sbt.complete.{ DefaultParsers, Parser }
import io.gatling.sbt.Utils._
object GatlingTasks {
val lastReport = inputKey[Unit]("Open last report in browser")
val allSimulationIds = taskKey[Set[String]]("List of simulation ids found in reports folder")
val allReports = taskKey[List[Report]]("List of all reports by simulation id and timestamp")
def findAllReports(reportsFolder: File): List[Report] = {
val allDirectories = (reportsFolder ** DirectoryFilter.&&(new PatternFilter(reportFolderRegex.pattern))).get
allDirectories.map(file => (file, reportFolderRegex.findFirstMatchIn(file.getPath).get)).map {
case (file, regexMatch) => Report(file, regexMatch.group(1), regexMatch.group(2))
}.toList
}
def findAllSimulationIds(allReports: Seq[Report]): Set[String] = allReports.map(_.simulationId).distinct.toSet
def openLastReport(allReports: List[Report], allSimulationIds: Set[String]): Unit = {
def simulationIdParser(allSimulationIds: Set[String]): Parser[Option[String]] =
DefaultParsers.ID.examples(allSimulationIds, check = true).?
def filterReportsIfSimulationIdSelected(allReports: List[Report], simulationId: Option[String]): List[Report] =
simulationId match {
case Some(id) => allReports.filter(_.simulationId == id)
case None => allReports
}
Def.inputTaskDyn {
val selectedSimulationId = simulationIdParser(allSimulationIds).parsed
val filteredReports = filterReportsIfSimulationIdSelected(allReports, selectedSimulationId)
val reportsSortedByDate = filteredReports.sorted.map(_.path)
Def.task(reportsSortedByDate.headOption.foreach(file => openInBrowser((file / "index.html").toURI)))
}
}
}
Of course, openLastReport is called using the results of the allReports and allSimulationIds tasks.
I think I'm close to a functioning input task but I'm still missing something...
Def.inputTaskDyn returns a value of type InputTask[T] and doesn't perform any side effects. The result needs to be bound to an InputKey, like lastReport. The return type of openLastReport is Unit, which means that openLastReport will construct a value that will be discarded, effectively doing nothing useful. Instead, have:
def openLastReport(...): InputTask[...] = ...
lastReport := openLastReport(...).evaluated
(Or, the implementation of openLastReport can be inlined into the right hand side of :=)
You probably don't need inputTaskDyn, but just inputTask. You only need inputTaskDyn if you need to return a task. Otherwise, use inputTask and drop the Def.task.
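For illustration, here is a rough sketch of what that could look like, reusing the helper names from the question (simulationIdParser, filterReportsIfSimulationIdSelected, openInBrowser); treat it as a sketch under those assumptions, not a tested implementation:
def openLastReport(allReports: List[Report], allSimulationIds: Set[String]): Def.Initialize[InputTask[Unit]] =
  Def.inputTask {
    // parse an optional simulation id, with completion on the known ids
    val selectedSimulationId = simulationIdParser(allSimulationIds).parsed
    val filteredReports = filterReportsIfSimulationIdSelected(allReports, selectedSimulationId)
    val reportsSortedByDate = filteredReports.sorted.map(_.path)
    // open the most recent matching report, if any
    reportsSortedByDate.headOption.foreach(file => openInBrowser((file / "index.html").toURI))
  }
It would then be bound to the key with lastReport := openLastReport(...).evaluated, as shown above.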