While learning Rust I am trying to build a simple web scraper. My aim is to scrape https://news.ycombinator.com/ and get the title, hyperlink, votes and username for each post. I am using the external libraries reqwest and scraper, and I wrote a program which scrapes the hyperlinks from that site.
Cargo.toml
[package]
name = "stackoverflow_scraper"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
scraper = "0.12.0"
reqwest = "0.11.2"
tokio = { version = "1", features = ["full"] }
futures = "0.3.13"
src/main.rs
use scraper::{Html, Selector};
use reqwest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://news.ycombinator.com/";
    let html = reqwest::get(url).await?.text().await?;
    let fragment = Html::parse_fragment(html.as_str());
    let selector = Selector::parse("a.storylink").unwrap();

    for element in fragment.select(&selector) {
        println!("{:?}", element.value().attr("href").unwrap());
        // todo println!("Title");
        // todo println!("Votes");
        // todo println!("User");
    }

    Ok(())
}
How do I get its corresponding title, votes and username?
The items on the front page are stored in a table with class .itemlist.
As each item is made up of three consecutive <tr> elements, you'll have to iterate over them in chunks of three. I opted to first gather all the nodes.
The first row contains the:
Title
Domain
The second row contains the:
Points
Author
Post age
The third row is a spacer that should be ignored.
Note:
Posts created within the last hour seemingly do not display any points, so this needs to be handled accordingly.
Advertisements do not contain a username.
The last two table rows, tr.morespace and the tr containing a.morelink, should be ignored. This is why I opted to first .collect() the nodes and then use .chunks_exact().
use reqwest;
use scraper::{Html, Selector};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://news.ycombinator.com/";
    let html = reqwest::get(url).await?.text().await?;
    let fragment = Html::parse_fragment(html.as_str());

    let selector_items = Selector::parse(".itemlist tr").unwrap();
    let selector_title = Selector::parse("a.storylink").unwrap();
    let selector_score = Selector::parse("span.score").unwrap();
    let selector_user = Selector::parse("a.hnuser").unwrap();

    let nodes = fragment.select(&selector_items).collect::<Vec<_>>();

    let list = nodes
        .chunks_exact(3)
        .map(|rows| {
            let title_elem = rows[0].select(&selector_title).next().unwrap();
            let title_text = title_elem.text().nth(0).unwrap();
            let title_href = title_elem.value().attr("href").unwrap();

            let score_text = rows[1]
                .select(&selector_score)
                .next()
                .and_then(|n| n.text().nth(0))
                .unwrap_or("0 points");
            let user_text = rows[1]
                .select(&selector_user)
                .next()
                .and_then(|n| n.text().nth(0))
                .unwrap_or("Unknown user");

            [title_text, title_href, score_text, user_text]
        })
        .collect::<Vec<_>>();

    println!("links: {:#?}", list);

    Ok(())
}
That should net you the following list:
[
    [
        "Docker for Mac M1 RC",
        "https://docs.docker.com/docker-for-mac/apple-m1/",
        "327 points",
        "mikkelam",
    ],
    [
        "A Mind Is Born – A 256 byte demo for the Commodore 64 (2017)",
        "https://linusakesson.net/scene/a-mind-is-born/",
        "226 points",
        "matthewsinclair",
    ],
    [
        "Show HN: Video Game in a Font",
        "https://www.coderelay.io/fontemon.html",
        "416 points",
        "ghub-mmulet",
    ],
    ...
]
Alternatively, there is an API available that one can use:
GitHub, HackerNews API
This is more of a selectors question, and it depends on the HTML of the site being scraped. In this case it's easy to get the title, but harder to get the points and user. Since the selector you're using selects the link, which contains both the href and title, you can get the title using the .text() method:
let title = element.text().collect::<Vec<_>>();
where element is the same element used for the href.
To get the other values, however, it would be easier to change the first selector and get the data from that. Since the title and link of a news item on news.ycombinator.com are in an element with the .athing class, and the votes and user are in the next element, which doesn't have a class (making it harder to select), it might be best to select "table.itemlist tr.athing" and iterate over those results. From each element found, you can then subselect the "a.storylink" element, and separately get the following tr element and subselect the points and user elements:
// Note: ElementRef must be brought into scope: use scraper::ElementRef;
let select_item = Selector::parse("table.itemlist tr.athing").unwrap();
let select_link = Selector::parse("a.storylink").unwrap();
let select_score = Selector::parse("span.score").unwrap();

for element in fragment.select(&select_item) {
    // Get the link element that contains the href and title
    let link_el = element.select(&select_link).next().unwrap();
    println!("{:?}", link_el.value().attr("href").unwrap());

    // Get the next tr element that follows the first, with score and user
    let details_el = ElementRef::wrap(element.next_sibling().unwrap()).unwrap();

    // Get the score element from within the second row element
    let score = details_el.select(&select_score).next().unwrap();
    println!("{:?}", score.text().collect::<Vec<_>>());
}
This only shows getting the href and score. I'll leave it to you to get the user from details_el.
Update:
Adding my Cargo.toml dependencies in case they are needed:
serde = { version = "1.0.117", default-features = false }
serde_json = "1.0.66"
sql-builder = "3.1"
sqlite = "0.26.0"
Update:
I did what @cdhowie suggested in the comments and added println!("{:?}", row[2].kind()); it prints String.
Is the way I store the blob wrong? Should I not be using serde_json::to_string, or something like that?
OP:
I wrote a demo to learn SQLite and Rust; here is the code:
use std::{fs::File, io::{Read, Write}};
use sql_builder::{quote, SqlBuilder};
use sqlite::Connection;

fn main() {
    // create sqlite databases on ./tmp/sqlite.db
    let conn = Connection::open("./tmp/sqlite.db").unwrap();

    // create table
    conn.execute(
        "CREATE TABLE IF NOT EXISTS icon (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT,
            content BLOB,
            used STRING
        )",
    )
    .unwrap();

    // read image file from disk and store to sqlite as blob
    let mut file = File::open("./tmp/in.jpg").unwrap();
    let mut contents = Vec::new();
    file.read_to_end(&mut contents).unwrap();
    println!("{:?}", contents);

    // build sql query
    let sql = SqlBuilder::insert_into("icon")
        .fields(&["name", "content", "used"])
        .values(&[
            quote(serde_json::to_string("in").unwrap()),
            quote(serde_json::to_string(&contents).unwrap()),
            quote(serde_json::to_string("1").unwrap()),
        ])
        .sql().unwrap();

    // execute query
    conn.execute(&sql).unwrap();

    // read image file from sqlite and store to disk
    let mut builder = SqlBuilder::select_from("icon");
    builder.field("id");
    builder.field("name");
    builder.field("content");
    builder.field("used");

    let stmt = conn.prepare(&builder.sql().unwrap()).unwrap();
    let mut cursor = stmt.into_cursor();
    let row = cursor.next().unwrap().unwrap();

    let id = row[0].as_integer().unwrap();
    let name = row[1].as_string().unwrap();
    let content = row[2].as_binary().unwrap(); // src/main.rs:51:38
    let used = row[3].as_string().unwrap();
    println!("{} {} {}", id, name, used);

    let mut file = File::create("./tmp/out.jpg").unwrap();
    file.write_all(content).unwrap();
}
When I run this code, I get the following error:
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/main.rs:51:38
But when I check the sqlite.db file, it looks like the file was stored successfully. How can I fix the code so that reading the image back from SQLite and storing it to disk works?
If you need more info, please let me know. :)
I managed to fix the issue, though this may not be the best solution.
// ...
// the value was stored as JSON text rather than a real blob, so read it back as a string
let content = row[2].as_string().unwrap();
// ...

// ...
// then deserialize the JSON array back into the original bytes before writing the file
let content: Vec<u8> = serde_json::from_str(&content).unwrap();
file.write_all(&content).unwrap();
// ...
Also, I may have found a better SQL-generation library, called diesel.
As I keep learning, I might find a better, more elegant way. Anyway, it's working now. :)
Realm allows you to receive the results of a query in sorted order.
let realm = try! Realm()
let dogs = realm.objects(Dog.self)
let dogsSorted = dogs.sorted(byKeyPath: "name", ascending: false)
I ran this test to see how quickly realm returns sorted data
import Foundation
import RealmSwift

class TestModel: Object {
    @Persisted(indexed: true) var value: Int = 0
}

class RealmSortTest {
    let documentCount = 1000000
    var smallestValue: TestModel = TestModel()

    func writeData() {
        let realm = try! Realm()
        var documents: [TestModel] = []
        for _ in 0 ... documentCount {
            let newDoc = TestModel()
            newDoc.value = Int.random(in: 0 ... Int.max)
            documents.append(newDoc)
        }
        try! realm.write {
            realm.deleteAll()
            realm.add(documents)
        }
    }

    func readData() {
        let realm = try! Realm()
        let sortedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
        let start = Date()
        self.smallestValue = sortedResults[0]
        let end = Date()
        let delta = end.timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
        print("Time Taken: \(delta)")
    }

    func updateSmallestValue() {
        let realm = try! Realm()
        let sortedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
        smallestValue = sortedResults[0]
        print("Originally loaded smallest value: \(smallestValue.value)")

        let newSmallestValue = TestModel()
        newSmallestValue.value = smallestValue.value - 1
        try! realm.write {
            realm.add(newSmallestValue)
        }
        print("Originally loaded smallest value after write: \(smallestValue.value)")

        let readStart = Date()
        smallestValue = sortedResults[0]
        let readEnd = Date()
        let readDelta = readEnd.timeIntervalSinceReferenceDate - readStart.timeIntervalSinceReferenceDate
        print("Reloaded smallest value \(smallestValue.value)")
        print("Time Taken to reload the smallest value: \(readDelta)")
    }
}
With documentCount = 100000, readData() output:
Time taken to load smallest value: 0.48901796340942383
and updateSmallestValue() output:
Originally loaded smallest value: 2075613243102
Originally loaded smallest value after write: 2075613243102
Reloaded smallest value 2075613243101
Time taken to reload the smallest value: 0.4624580144882202
With documentCount = 1000000, readData() output:
Time taken to load smallest value: 4.807577967643738
and updateSmallestValue() output:
Originally loaded smallest value: 4004790407680
Originally loaded smallest value after write: 4004790407680
Reloaded smallest value 4004790407679
Time taken to reload the smallest value: 5.2308430671691895
The time taken to retrieve the first document from a sorted result set scales with the number of documents stored in Realm rather than the number of documents being retrieved. This indicates to me that Realm is sorting all of the documents at query time rather than when the documents are being written. Is there a way to index your data so that you can quickly retrieve a small number of sorted documents?
Edit:
Following discussion in the comments, I updated the code to load only the smallest value from the sorted collection.
Edit 2
I updated the code to observe the results as suggested in the comments.
import Foundation
import RealmSwift

class TestModel: Object {
    @Persisted(indexed: true) var value: Int = 0
}

class RealmSortTest {
    let documentCount = 1000000
    var smallestValue: TestModel = TestModel()
    var storedResults: Results<TestModel> = (try! Realm()).objects(TestModel.self).sorted(byKeyPath: "value")
    var resultsToken: NotificationToken? = nil

    func writeData() {
        let realm = try! Realm()
        var documents: [TestModel] = []
        for _ in 0 ... documentCount {
            let newDoc = TestModel()
            newDoc.value = Int.random(in: 0 ... Int.max)
            documents.append(newDoc)
        }
        try! realm.write {
            realm.deleteAll()
            realm.add(documents)
        }
    }

    func observeData() {
        let realm = try! Realm()
        print("Loading Data")
        let startTime = Date()
        self.storedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
        self.resultsToken = self.storedResults.observe { changes in
            let observationTime = Date().timeIntervalSince(startTime)
            print("Time to first observation: \(observationTime)")

            let firstTenElementsSlice = self.storedResults[0..<10]
            let elementsArray = Array(firstTenElementsSlice) // print this if you want to see the elements
            elementsArray.forEach { print($0.value) }

            let moreElapsed = Date().timeIntervalSince(startTime)
            print("Time to printed elements: \(moreElapsed)")
        }
    }
}
and I got the following output
Loading Data
Time to first observation: 5.252112984657288
3792614823099
56006949537408
Time to printed elements: 5.253015995025635
Reading the data with an observer did not reduce the time taken to read the data.
At this time it appears that Realm sorts data when it is accessed rather than when it is written, and there is not a way to have Realm sort data at write time. This means that accessing sorted data scales with the number of documents in the database rather than the number of documents being accessed.
The actual time taken to access the data varies by use case and platform.
dogs and dogsSorted are Realm Results collection objects that essentially contain pointers to the underlying data, not the data itself.
Defining a sort order does NOT load all of the objects and they remain lazy - only loading as needed, which is one of the huge benefits of Realm; giant datasets can be used without worrying about overloading memory.
It's also one of the reasons that Realm Results objects always reflect the current state of the underlying data; that data can change many times and what you see in your app's Results vars (and Realm collections in general) will always show the updated data.
As a side note, at this time working with Realm collection objects via Swift's high-level functions causes that data to load into memory - so don't do that. Sort, filter, etc. with Realm functions and everything stays lazy and memory friendly.
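To illustrate the difference, here's a minimal sketch using the TestModel class from the question; the lazyVersusEager function name and the value threshold of 1000 are just illustrative assumptions.
import RealmSwift

func lazyVersusEager() {
    let realm = try! Realm()

    // Stays lazy: Realm evaluates the query and sort itself,
    // and objects are only materialized as they are accessed.
    let lazyResults = realm.objects(TestModel.self)
        .filter("value > %@", 1000)
        .sorted(byKeyPath: "value")
    print("lazy count: \(lazyResults.count)")

    // Loads into memory: Swift's Sequence.filter iterates every object and
    // builds a plain Array, so every matching object gets instantiated.
    let eagerArray = realm.objects(TestModel.self).filter { $0.value > 1000 }
    print("eager count: \(eagerArray.count)")
}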
Indexing is a trade-off; on one hand it can improve the performance of certain queries, like an equality ( "name == 'Spot'" ), but on the other hand it can slow down write performance. Additionally, adding indexes takes up a bit more space.
Generally speaking, indexing is best for specific use cases; maybe in a situation where you're doing some kind of type-ahead autofill where performance is critical. We have several apps with very large datasets (GBs) and nothing is indexed, because the performance advantage gained is offset by slower writes, which happen frequently. I suggest starting without indexing.
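As a rough illustration of the kind of case where an index pays off, here's a hypothetical model (not from the question) with an indexed property used in an equality query:
import RealmSwift

// Hypothetical model: "name" is indexed because it is queried by equality
// (e.g. type-ahead lookups); "notes" is not, since it is rarely filtered on.
class Dog: Object {
    @Persisted(indexed: true) var name: String = ""
    @Persisted var notes: String = ""
}

func findSpot() {
    let realm = try! Realm()
    // The index on "name" speeds up this equality query,
    // at the cost of slightly slower writes and a bit more storage.
    let spots = realm.objects(Dog.self).filter("name == %@", "Spot")
    print(spots.count)
}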
EDIT:
Going to update the answer based on additional discussion.
First and foremost, copying data from one object to another is not a measure of database loading performance. The real objective here is the user experience and/or being able to access that data - from the time the user expects to see the data to when it's shown. So let's provide some code to demonstrate general performance:
We'll first start with a similar model to what the OP used
class TestModel: Object {
    @Persisted(indexed: true) var value: Int = 0

    convenience init(withIndex: Int) {
        self.init()
        self.value = withIndex
    }
}
Then define a couple of vars to hold the Results from disk and a notification token which allows us to know when that data is available to be displayed to the user. And then lastly a var to hold the time of when the loading starts
var modelResults: Results<TestModel>!
var modelsToken: NotificationToken?
var startTime = Date()
Here's the function that writes lots of data. The objectCount var will be changed from 10,000 objects on the first run to 1,000,000 objects on the second. Note this is bad coding as I am creating a million objects in memory so don't do this; for demonstration purposes only.
func writeLotsOfData() {
    let realm = try! Realm()
    let objectCount = 1000000

    autoreleasepool {
        var testModelArray = [TestModel]()
        for _ in 0..<objectCount {
            let m = TestModel(withIndex: Int.random(in: 0 ... Int.max))
            testModelArray.append(m)
        }
        try! realm.write {
            realm.add(testModelArray)
        }
        print("data written: \(testModelArray.count) objects")
    }
}
and then finally the function that loads those objects from realm and outputs when the data is available to be shown to the user. Note they are sorted per the original question - and in fact will maintain their sort as data is added and changed! Pretty cool stuff.
func loadBigData() {
    let realm = try! Realm()
    print("Loading Data")
    self.startTime = Date()
    self.modelResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
    self.modelsToken = self.modelResults?.observe { changes in
        let elapsed = Date().timeIntervalSince(self.startTime)
        print("Load completed of \(self.modelResults.count) objects - elapsed time of \(elapsed)")
    }
}
and the results. Two runs, one with 10,000 objects and one with 1,000,000 objects
data written: 10000 objects
Loading Data
Load completed of 10000 objects - elapsed time of 0.0059670209884643555
data written: 1000000 objects
Loading Data
Load completed of 1000000 objects - elapsed time of 0.6800119876861572
There are three things to note:
1. A Realm notification object fires an event when the data has completed loading, and also when there are additional changes. We are leveraging that to notify the app when the data has completed loading and is available to be used - shown to the user, for example.
2. We are lazily loading all of the objects! At no point are we going to run into a memory overloading issue. Once the objects have loaded into the results, they are freely available to be shown to the user or processed in whatever way is needed. It's super important to work with Realm objects in a Realm way when working with large datasets. Generally speaking, if it's 10 objects, well, no problem tossing them into an array, but when there are 1 million objects - let Realm do its lazy job.
3. The app is protected using the above code and techniques. There could be 10 objects or 1,000,000 objects and the memory impact is minimal.
EDIT 2
(see comment to the OP's question for more info about this edit)
Per a request from the OP, they wanted to see the same exercise with printed values and times. Here's the updated code:
self.modelsToken = self.modelResults?.observe { changes in
    let elapsed = Date().timeIntervalSince(self.startTime)
    print("Load completed of \(self.modelResults.count) objects - elapsed time of \(elapsed)")

    print("print first 10 object values")
    let firstTenElementsSlice = self.modelResults[0..<10]
    let elementsArray = Array(firstTenElementsSlice) // print this if you want to see the elements
    elementsArray.forEach { print($0.value) }

    let moreElapsed = Date().timeIntervalSince(self.startTime)
    print("Printing of 10 elements completed: \(moreElapsed)")
}
and then the output
Loading Data
Load completed of 1000000 objects - elapsed time of 0.6730009317398071
print first 10 object values
12264243738520
17242140785413
29611477414437
31558144830373
32913160803785
45399774467128
61700529799916
63929929449365
73833938586206
81739195218861
Printing of 10 elements completed: 0.6745189428329468
I have the following model:
class Process: Object {
    @objc dynamic var processID: Int = 1
    let steps = List<Step>()
}

class Step: Object {
    @objc private dynamic var stepCode: Int = 0
    @objc dynamic var stepDateUTC: Date? = nil

    var stepType: ProcessStepType {
        get {
            return ProcessStepType(rawValue: stepCode) ?? .created
        }
        set {
            stepCode = newValue.rawValue
        }
    }
}

enum ProcessStepType: Int { // to review - real value
    case created = 0
    case scheduled = 1
    case processing = 2
    case paused = 3
    case finished = 4
}
A process can start, be processing, be paused, resume (i.e. be in the processing step again), pause, resume again, etc. The current step is the one with the latest stepDateUTC.
I am trying to get all Processes whose last step is of stepType .processing, i.e. where the step with the latest stepDateUTC has stepCode 2.
I came up with the following predicate, which doesn't work. Any idea of the right way to perform such a query?
My best attempt is the one below. Is it possible to get to this result via one Realm query?
let processes = realm.objects(Process.self).filter(NSPredicate(format: "ANY steps.stepCode = 2 AND NOT (ANY steps.stepCode = 4)"))
let ongoingprocesses = processes.filter { $0.steps.sorted(byKeyPath: "stepDateUTC", ascending: false).first!.stepType == .processing }
what I hoped would work
NSPredicate(format: "steps[LAST].stepCode = \(TicketStepType.processing.rawValue)")
I understand [LAST] is not supported by Realm (as per the cheatsheet), but is there any way around this so I could achieve my goal through a Realm query?
There are a few ways to approach this and it doesn't appear the date property is relevant because lists are stored in sequential order (as long as they are not altered), so the last element in the List was added last.
This first piece of code will filter for processes where the last element is 'processing'. I coded this long-handed so the flow is more understandable.
let results = realm.objects(Process.self).filter { p in
    let lastIndex = p.steps.count - 1
    let step = p.steps[lastIndex]
    let type = step.stepType
    if type == .processing {
        return true
    }
    return false
}
Note that Realm objects are lazily loaded - which means thousands of objects have a low memory impact. By filtering using Swift, the objects are filtered in memory so the impact is more significant.
The second piece of code is what I would suggest as it makes filtering much simpler, but would require a slight change to the Process model.
class Process: Object {
    @objc dynamic var processID: Int = 1
    let stepHistory = List<Step>() // RENAMED: the history of the steps
    @objc dynamic var name = ""
    // ADDED: new property tracks the current step
    @objc dynamic var current_step = ProcessStepType.created.index
}
My thought here is that the Process model keeps a 'history' of steps that have occurred so far, and then what the current_step is.
I also modified the ProcessStepType enum to make it more filterable friendly.
enum ProcessStepType: Int { // to review - real value
    case created = 0
    case scheduled = 1
    case processing = 2
    case paused = 3
    case finished = 4

    // this is used when filtering
    var index: Int {
        switch self {
        case .created:
            return 0
        case .scheduled:
            return 1
        case .processing:
            return 2
        case .paused:
            return 3
        case .finished:
            return 4
        }
    }
}
Then, to return all processes where the last step in the list is 'processing', here's the filter:
let results2 = realm.objects(Process.self).filter("current_step == %@", ProcessStepType.processing.index)
The final thought is to add some code to the Process model so when a step is added to the list, the current_step var is also updated. Coding that is left to the OP.
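For what it's worth, a minimal sketch of that idea might look like this; the addStep(_:) helper name is my own, it assumes the renamed model above, and it must be called inside a write transaction.
import RealmSwift

extension Process {
    // Hypothetical helper: append a step to the history and keep
    // current_step in sync. Call this inside realm.write { }.
    func addStep(_ step: Step) {
        stepHistory.append(step)
        current_step = step.stepType.index
    }
}

// Usage sketch:
// try! realm.write {
//     let step = Step()
//     step.stepType = .processing
//     step.stepDateUTC = Date()
//     someProcess.addStep(step)
// }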
I am using Swift in a project, and using SQLite.swift for database handling. I am trying to retrieve the most recent entry from my database like below:
func returnLatestEmailAddressFromEmailsTable() -> String {
    let dbPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true).first as String
    let db = Database("\(dbPath)/db.sqlite3")
    let emails = db["emails"]
    let email = Expression<String>("email")
    let time = Expression<Int>("time")
    var returnEmail: String = ""

    for res in emails.limit(1).order(time.desc) {
        returnEmail = res[email]
        println("from inside: \(returnEmail)")
    }

    return returnEmail
}
I am trying to test the returned string from the above function like this:
println("from outside: \(returnLatestEmailAddressFromEmailsTable())")
Note how I print the value from both inside and outside of the function. Inside, it works every single time. I am struggling with the "from outside:" part.
Sometimes the function returns the correct email, but sometimes it returns "" (presumably, the value was not set in the for loop).
How can I add "blocking" functionality so calling returnLatestEmailAddressFromEmailsTable() will always first evaluate the for loop, and only after this return the value?
Does Scala support something like dynamic properties? Example:
val dog = new Dynamic   // Dynamic does not define 'name' nor 'speak'.
dog.name = "Rex"        // New property.
dog.speak = { "woof" }  // New method.

val cat = new Dynamic
cat.name = "Fluffy"
cat.speak = { "meow" }

val rock = new Dynamic
rock.name = "Topaz"
// rock doesn't speak.

def test(animal: Any) = {
  animal.name + " is telling " + animal.speak()
}

test(dog)  // "Rex is telling woof"
test(cat)  // "Fluffy is telling meow"
test(rock) // "Topaz is telling null"
What is the closest thing from it we can get in Scala? If there's something like "addProperty" which allows using the added property like an ordinary field, it would be sufficient.
I'm not interested in structural type declarations ("type safe duck typing"). What I really need is to add new properties and methods at runtime, so that the object can be used by a method/code that expects the added elements to exist.
Scala 2.9 will have a specially handled Dynamic trait that may be what you are looking for.
This blog has a bit about it: http://squirrelsewer.blogspot.com/2011/02/scalas-upcoming-dynamic-capabilities.html
I would guess that in the applyDynamic method you will need to check for "name_=", "speak_=", "name" and "speak", and you could store the values in a private map.
I cannot think of a reason to really need to add/create methods/properties dynamically at run-time unless dynamic identifiers are also allowed, and/or there is a magical binding to an external dynamic source (JRuby or JSON are two good examples).
Otherwise the example posted can be implemented entirely using the existing static typing in Scala via "anonymous" types and structural typing. Anyway, not saying that "dynamic" wouldn't be convenient (and as 0__ pointed out, is coming -- feel free to "go edge" ;-).
Consider:
val dog = new {
  val name = "Rex"
  def speak = { "woof" }
}

val cat = new {
  val name = "Fluffy"
  def speak = { "meow" }
}

// Rock not shown here -- because it doesn't speak it won't compile
// with the following unless it stubs in. In both cases it's an error:
// the issue is when/where the error occurs.
def test(animal: { val name: String; def speak: String }) = {
  animal.name + " is telling " + animal.speak
}

// However, we can take in the more general type { val name: String } and try to
// invoke the possibly non-existent property, albeit in a hackish sort of way.
// Unfortunately pattern matching does not work with structural types AFAIK :(
val rock = new {
  val name = "Topaz"
}

def test2(animal: { val name: String }) = {
  animal.name + " is telling " + (try {
    animal.asInstanceOf[{ def speak: String }].speak
  } catch { case _ => "{very silently}" })
}

test(dog)
test(cat)
// test(rock) -- no! will not compile (a good thing)

test2(dog)
test2(cat)
test2(rock)
However, this method can quickly get cumbersome (to "add" a new attribute one would need to create a new type and copy over the current data into it) and is partially exploiting the simplicity of the example code. That is, it's not practically possible to create true "open" objects this way; in the case for "open" data a Map of sorts is likely a better/feasible approach in the current Scala (2.8) implementation.
Happy coding.
First off, as @pst pointed out, your example can be entirely implemented using static typing; it doesn't require dynamic typing.
Secondly, if you want to program in a dynamically typed language, program in a dynamically typed language.
That being said, you can actually do something like that in Scala. Here is a simplistic example:
class Dict[V](args: (String, V)*) extends Dynamic {
  import scala.collection.mutable.Map

  private val backingStore = Map[String, V](args:_*)

  def typed[T] = throw new UnsupportedOperationException()

  def applyDynamic(name: String)(args: Any*) = {
    val k = if (name.endsWith("_=")) name.dropRight(2) else name
    if (name.endsWith("_=")) backingStore(k) = args.first.asInstanceOf[V]
    backingStore.get(k)
  }

  override def toString() = "Dict(" + backingStore.mkString(", ") + ")"
}

object Dict {
  def apply[V](args: (String, V)*) = new Dict(args:_*)
}
val t1 = Dict[Any]()
t1.bar_=("quux")
val t2 = new Dict("foo" -> "bar", "baz" -> "quux")
val t3 = Dict("foo" -> "bar", "baz" -> "quux")
t1.bar // => Some(quux)
t2.baz // => Some(quux)
t3.baz // => Some(quux)
As you can see, you were pretty close, actually. Your main mistake was that Dynamic is a trait, not a class, so you can't instantiate it, you have to mix it in. And you obviously have to actually define what you want it to do, i.e. implement typed and applyDynamic.
If you want your example to work, there are a couple of complications. In particular, you need something like a type-safe heterogeneous map as a backing store. Also, there are some syntactic considerations. For example, foo.bar = baz is only translated into foo.bar_=(baz) if foo.bar_= exists, which it doesn't, because foo is a Dynamic object.