NFT List Prices - web-scraping

OpenSea allows users to buy and sell NFTs, and you can view the prices of listed NFTs within a project on its site. When an NFT is listed, is the listing price stored on the blockchain, or is it stored only on OpenSea's platform? Ultimately, I am looking for a way to scrape the prices of listed tokens within any NFT project. While I could scrape OpenSea's website directly, the NFT data there is lazy-loaded, which complicates scraping OpenSea.io, and I do not wish to use Selenium.
tl;dr: Is there any way to determine the price of an NFT token within a project without using OpenSea?

Typically people are "lazy minting" and listing via the OpenSea website, which means the listing is not on-chain; you will see on OpenSea that the metadata is "editable" for almost all NFTs listed for sale. Here's an example:
OpenSea listing:
Etherscan address for the person listing: (note: no on-chain transactions)
What's the scope of your scraping? Your best bet may be to pull the data via the OpenSea API: https://docs.opensea.io/reference/api-overview
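If the API route works for your scope, here is a minimal sketch of pulling listings for a collection. The v1 /assets endpoint, the collection slug parameter, the X-API-KEY header, and the sell_orders field are assumptions based on the docs linked above; verify against the current API reference before relying on them.
// Sketch: fetch listed assets for a collection via the OpenSea API.
// Endpoint shape and fields are assumptions from the v1 docs linked above.
async function fetchListings(collectionSlug: string, apiKey: string) {
  const url = `https://api.opensea.io/api/v1/assets?collection=${collectionSlug}&limit=50`;
  const res = await fetch(url, { headers: { "X-API-KEY": apiKey } });
  if (!res.ok) throw new Error(`OpenSea API error: ${res.status}`);
  const { assets } = await res.json();
  // sell_orders, when present, carries the current listing price in wei
  return assets.map((a: any) => ({
    tokenId: a.token_id,
    price: a.sell_orders?.[0]?.current_price ?? null,
  }));
}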

The listing price is different from the NFT price. The listing price is the fee you pay to the marketplace; otherwise everyone would list NFTs for free, and that would put extra load on the contract and the website's servers.
When you write an NFT marketplace contract, you specify the listing fee:
uint public listingFee = 0.025 ether;
Logically, listingFee must be on-chain, since NFT creators interact directly with the smart contract.
The price of the NFT itself is different. When you create an NFT item, you define a struct:
struct NftItem {
    uint tokenId;
    uint price;
    address creator;
    bool isListed;
}
To create an NFT item, define a function:
function _createNftItem(uint tokenId, uint price) private {
    require(price > 0, "Price must be at least 1 wei");
    // NFT items are stored in a mapping: tokenId => NftItem
    _idToNftItem[tokenId] = NftItem(
        tokenId,
        price,
        msg.sender,
        true
    );
    // you could emit an NFT-created event here
}
The price of the NFT is set by you on the fly when you submit the form to create it; since NFT items are stored on-chain as structs, the price is included.
Now you call the mint function:
function mintToken(string memory tokenURI, uint price) public payable returns (uint) {
    // make sure you don't mint the same URI again
    require(!tokenURIExists(tokenURI), "Token URI already exists");
    // this is where you make sure the sender pays the listing fee to use the platform;
    // it is a one-time fee, so you can keep a mapping of msg.sender => bool to track
    // who has paid the listing fee, and require payment from those who have not
    require(msg.value == listingFee, "Price must be equal to listing fee");
    // ... more logic here
    _usedTokenURIs[tokenURI] = true;
    return tokenIdOfNewlyCreatedNftItem;
}
I have only included the parts of the mint function that relate to your question.
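To tie this back to the tl;dr: when a marketplace contract stores prices on-chain like this, you can read them without touching OpenSea at all. Here is a minimal sketch with ethers.js, assuming (hypothetically) that the contract exposes a public getter for the struct above; the RPC URL, contract address, and getNftItem function are placeholders.
import { ethers } from "ethers";

// Hypothetical ABI fragment: assumes the marketplace contract has a public
// view function returning the NftItem struct shown above.
const abi = [
  "function getNftItem(uint256 tokenId) view returns (uint256 tokenId, uint256 price, address creator, bool isListed)",
];

async function readPrice(contractAddress: string, tokenId: number) {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.org"); // any Ethereum RPC endpoint
  const contract = new ethers.Contract(contractAddress, abi, provider);
  const item = await contract.getNftItem(tokenId);
  return ethers.formatEther(item.price); // listed price in ETH
}
The catch, as the first answer notes, is that lazy-minted OpenSea listings never touch a contract like this, so for them there is nothing on-chain to read.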

Related

Api Platform and Embedded Write

I have an issue with my Symfony API Platform project. I have two entities, a news item and an author; every news item belongs to an author.
I want to expose an API that lets external clients add a news item using a JSON payload with the author's details; if the author already exists, the platform should retrieve that author and store the news item against them.
When I embed the author as JSON, the platform always creates a new author, even if an author with the same key (say, the slug) already exists.
What is the best way to handle this situation? Say I have this payload:
{
  "author": {
    "slug": "luca"
  },
  "title": "news Title"
}
I expect the API to search for the author, add OR create one, then save the news item for that author.
If I add a unique constraint on the author slug, I get a database error because the platform tries to store a new author every time.
But I don't want a public API where the client has to supply an IRI, because then the client would FIRST have to add the author, IF it does not already exist in my db, and THEN use the IRI... too difficult.
What is, in your view, the best way to achieve this? Some kind of event subscriber? Or a DTO?
Thanks in advance.

How to store keywords in firebase firestore

My application uses keywords extensively; everything is tagged with keywords, so whenever a user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in a separate collection, as below:
export interface IKeyword {
  Id: string;
  Name: string;
  CreatedBy: IUserMin;
  CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
  UserId: string;
  DisplayName: string;
}
export interface IKeywordMin {
  Id: string;
  Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
  Field1: string;
  Field2: string;
  // ...other fields...
  Keywords: IKeywordMin[];
}
But the problem is that autocomplete reads data frequently, so my document-read quota is used up very fast.
Is there a way to implement this without increasing the reads for keywords? The keywords are not the real data we need to fetch.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the query below to get keywords for the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types in the autocomplete search, which causes more document reads.
Does this answer your question? https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool: https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind, though I'm not sure it suits your use case, is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and falls back to the cached data on the client device only when it cannot reach the server. You can take advantage of this feature by using the cache first and reaching the server only when you want. For web applications the feature is disabled by default; you can enable it as described in https://firebase.google.com/docs/firestore/manage-data/enable-offline
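A rough sketch of that cache-first idea with the v9 web SDK (enableIndexedDbPersistence and getDocsFromCache are the relevant v9 APIs; falling back to the server only on a cache miss is just one possible policy, and the keywords collection name follows the question):
import { initializeApp } from "firebase/app";
import {
  getFirestore, enableIndexedDbPersistence, collection,
  query, orderBy, startAt, endAt, limit, getDocs, getDocsFromCache,
} from "firebase/firestore";

const app = initializeApp({ /* your firebase config */ });
const db = getFirestore(app);
// Offline persistence is off by default on web; a failure here usually means
// multiple open tabs or an unsupported browser.
enableIndexedDbPersistence(db).catch(() => {});

async function suggestKeywords(searchTerm: string) {
  const q = query(
    collection(db, "keywords"),
    orderBy("Name"), startAt(searchTerm), endAt(searchTerm + "\uf8ff"), limit(20),
  );
  // Serve the autocomplete from the local cache first...
  const cached = await getDocsFromCache(q);
  if (!cached.empty) return cached.docs.map((d) => d.data());
  // ...and only hit the server (billable reads) on a cache miss.
  const fresh = await getDocs(q);
  return fresh.docs.map((d) => d.data());
}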
I found a solution and thought I would share it here.
Create a new collection named typeaheads in the following format:
export interface ITypeAHead {
  Prefix: string;
  CollectionName: string;
  FieldName: string;
  MatchingValues: ILookupItem[];
}
export interface ILookupItem {
  Key: string;
  Value: string;
}
Depending on your minimum search length, add either the first 2 or 3 letters to Prefix, and search based on the prefix, collection, and field. Most likely you will end up with 2 or 3 document reads per search.
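A sketch of the lookup against that collection (field names follow the interfaces above; the 3-letter prefix and lowercasing are assumptions, and the remaining keystrokes filter MatchingValues in memory rather than re-querying):
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

const db = getFirestore();

// One or a few document reads per search: fetch the typeahead doc(s) for the
// prefix, then narrow the embedded values client-side as the user types.
async function typeahead(term: string, collectionName: string, fieldName: string) {
  const prefix = term.slice(0, 3).toLowerCase();
  const q = query(
    collection(db, "typeaheads"),
    where("Prefix", "==", prefix),
    where("CollectionName", "==", collectionName),
    where("FieldName", "==", fieldName),
  );
  const snap = await getDocs(q);
  return snap.docs
    .flatMap((d) => (d.data() as any).MatchingValues as { Key: string; Value: string }[])
    .filter((item) => item.Value.toLowerCase().startsWith(term.toLowerCase()));
}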
Hope this helps someone else.

Get the date/time a badge was awarded?

Using this, it is possible to fetch the badges of a specific Stack Overflow user:
library(stackr)
badges <- stack_users(9371451, "badges", num_pages=100000, pagesize=100)
How can I add a parameter to also get the timestamp at which each badge was awarded to the user? And, if possible, the answer it was awarded for?
You can use users/{ids}/timeline. See its description page:
Returns a subset of the actions the users in {ids} have taken on the site.
This method returns users' posts, edits, and earned badges in the order they were accomplished.
library(stackr)
df_timeline <- stackr:::stack_GET("users/9371451/timeline", num_pages = 10000)
The ::: is necessary because stack_GET is an internal function of the package.
It is possible-ish with the Stack Exchange API, but not with the stackr library you are using.
The /users/{ids}/badges route returns a list of badge objects, which only has these possible properties:
award_count: integer
badge_id: integer (refers to a badge)
badge_type: one of named or tag_based
description: string
link: string
name: string
rank: one of gold, silver, or bronze
user: shallow_user
So you can't get the timestamp or triggering post there.
However, you can get this information (mostly) from the /notifications route, which can return results like:
{ "items": [ {
    "site": { "site_url": "https://webmasters.stackexchange.com" },
    "is_unread": false,
    "creation_date": 1520234766,
    "notification_type": "badge_earned",
    "body": "You've earned the \"Notable Question\" badge for <a href=\"http://webmasters.stackexchange.com/questions/65822\">How to bulk delete email accounts from cPanel / my hosting account?</a>."
  },
  etc.
But, important:
/notifications requires authentication and only works for a logged-in (via the API) user.
The stackr library does not support authentication. (I'll get to your previous question in a bit.)
/notifications returns notifications from all of a given user's Stack Exchange sites, so you will have to filter out the sites you are not interested in.
/notifications returns several kinds of notices, so you will have to filter out the ones that are not badge-related.
/notifications does not return badge details like rank, so you will still need to call /users/{ids}/badges and marry the results.
For higher-rep users, you may exhaust your API quota before being able to fetch all of that user's notifications.
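Outside of R, here is a minimal sketch of that /notifications call and the badge filtering described above. The access token and key are placeholders you get by registering an app with the Stack Exchange API, and the site filter shows one way to keep only Stack Overflow.
// Sketch: pull the authenticated user's notifications and keep badge awards.
// accessToken and key are placeholders from registering a Stack Exchange app;
// remember /notifications only works for the logged-in user.
async function badgeNotifications(accessToken: string, key: string) {
  const url = `https://api.stackexchange.com/2.3/notifications?access_token=${accessToken}&key=${key}`;
  const res = await fetch(url);
  const { items } = await res.json();
  return items
    .filter((n: any) => n.notification_type === "badge_earned")
    .filter((n: any) => n.site?.site_url === "https://stackoverflow.com")
    .map((n: any) => ({ awardedAt: new Date(n.creation_date * 1000), body: n.body }));
}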

youtube channel new ID and iframe list user_uploads

It seems that YouTube now uses IDs for channels instead of names (part of the v3 API).
However, the embedded iframe playlist player does not seem to handle these channel IDs.
Example channel: https://www.youtube.com/channel/UCpAOGs57EWRvOPXQhnYHpow
The ID is UCpAOGs57EWRvOPXQhnYHpow
Now try to load this:
http://www.youtube.com/embed/?listType=user_uploads&list=UCpAOGs57EWRvOPXQhnYHpow
Can anyone shine a light on this issue? Or is there some hidden username?
I also posted this question on the gdata-issues tracker: http://code.google.com/p/gdata-issues/issues/detail?id=6463
The issue here is that a channel is not a playlist; channels can have multiple playlists, yet the listType parameter is designed to look up an actual playlist object. The documented way around this is to use the Data API and call the channels endpoint, looking at the contentDetails part:
GET https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id=UCuo5NTU3pmtPejmlzjCgwdw&key={YOUR_API_KEY}
The result will give you all of the feeds associated with that channel that you can choose from:
"contentDetails": {
  "relatedPlaylists": {
    "uploads": "UUuo5NTU3pmtPejmlzjCgwdw"
  }
}
If available (sometimes with OAuth), there could also be "watch later" lists, "likes" lists, etc.
This may seem like a lot of overhead. In the short term, though, note that the different feeds are named programmatically; for example, if my user channel ID begins with UC and then a long string, the UC stands for 'user channel', and the uploads feed would begin with 'UU' (user uploads) followed by the rest of the same long string. (You'd also have 'LL' for the likes list, 'WL' for the watch later list, 'HL' for the history list, 'FL' for the favorites list, etc.) This is NOT documented, so there is no guarantee that the naming convention will persist. But at least for now, you can change your ID string from beginning with UC to beginning with UU, like this:
http://www.youtube.com/embed/?listType=user_uploads&list=UUpAOGs57EWRvOPXQhnYHpow
And it embeds nicely.
Just to report on the current state of things: the change suggested by jlmcdonald doesn't work anymore, but you can still get a proper embed link via videoseries (with the same UC-to-UU change). In other words, a link like
http://www.youtube.com/embed/videoseries?list=UUpAOGs57EWRvOPXQhnYHpow
works as of the time of writing.
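Putting the two answers together, here is a small sketch that resolves a channel's uploads playlist via the Data API and builds the embed link that currently works; YOUR_API_KEY stands in for a real Data API v3 key.
// Sketch: channel ID -> uploads playlist ID -> embeddable videoseries URL.
// Uses the documented channels endpoint rather than relying on the
// undocumented UC -> UU renaming convention.
async function uploadsEmbedUrl(channelId: string, apiKey: string) {
  const api = `https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id=${channelId}&key=${apiKey}`;
  const res = await fetch(api);
  const data = await res.json();
  const uploads = data.items[0].contentDetails.relatedPlaylists.uploads; // e.g. "UUpAOGs..."
  return `https://www.youtube.com/embed/videoseries?list=${uploads}`;
}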

I want to port my delicious bookmarks to my website

I started building an app that will automatically download my Delicious bookmarks and save them to a database, so that I can view them on my own website in my favoured format.
I am forced to use OAuth, since I log in to Delicious with a Yahoo ID. The problem is that I am stuck at the point where OAuth requires the user to go and authenticate manually.
Is there code or a guide available anywhere that I can follow? All I want is a way to automatically save my bookmarks to my database.
Any help is appreciated. I can work in Java, .NET, and PHP. Thanks.
Delicious provides an API for this already:
https://api.del.icio.us/v1/posts/all?
Returns all posts. Please use sparingly. Call the update function to see if you need to fetch this at all.
Arguments
&tag={TAG}
(optional) Filter by this tag.
&start={#}
(optional) Start returning posts this many results into the set.
&results={#}
(optional) Return this many results.
&fromdt={CCYY-MM-DDThh:mm:ssZ}
(optional) Filter for posts on this date or later
&todt={CCYY-MM-DDThh:mm:ssZ}
(optional) Filter for posts on this date or earlier
&meta=yes
(optional) Include change detection signatures on each item in a 'meta' attribute. Clients wishing to maintain a synchronized local store of bookmarks should retain the value of this attribute - its value will change when any significant field of the bookmark changes.
Example
$ curl https://user:passwd@api.del.icio.us/v1/posts/all
<posts tag="" user="user">
  <post href="http://www.weather.com/" description="weather.com"
        hash="6cfedbe75f413c56b6ce79e6fa102aba" tag="weather reference"
        time="2005-11-29T20:30:47Z" />
  ...
  <post href="http://www.nytimes.com/"
        description="The New York Times - Breaking News, World News & Multimedia"
        extended="requires login" hash="ca1e6357399774951eed4628d69eb84b"
        tag="news media" time="2005-11-29T20:30:05Z" />
</posts>
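For what it's worth, here is a browser-side sketch of fetching and parsing that feed. It assumes a classic Delicious login that works with HTTP basic auth (Yahoo-linked accounts need the OAuth flow instead, as noted below), and the use of DOMParser limits the sketch to the browser.
// Sketch: pull all bookmarks over the v1 API with basic auth, parse the XML.
async function fetchBookmarks(user: string, password: string) {
  const res = await fetch("https://api.del.icio.us/v1/posts/all", {
    headers: { Authorization: "Basic " + btoa(`${user}:${password}`) },
  });
  const xml = new DOMParser().parseFromString(await res.text(), "text/xml");
  return Array.from(xml.querySelectorAll("post")).map((post) => ({
    href: post.getAttribute("href"),
    description: post.getAttribute("description"),
    tags: (post.getAttribute("tag") ?? "").split(" "),
    time: post.getAttribute("time"),
  }));
}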
There are also public and private RSS feeds for bookmarks, so if you can read and parse XML you don't necessarily need to use the API.
Note, however, that if you registered with Delicious after December and therefore use your Yahoo account, the above will not work and you'll need to use OAuth.
There are a number of full examples on the Delicious support site; see for example: http://support.delicious.com/forum/comments.php?DiscussionID=3698
