Ton shows the way again
Once more, Ton has shown not only that he is thinking about similar personal information tooling challenges, but that he is ahead of me, having implemented another personal tool to support his workflow.
Ton has already implemented a feed reader running on his local machine, to which he has added the ability to post commentary on tagged feed items to his public website (which I think runs on WordPress).
In this post he describes augmenting the tool to give him the option of posting his clipped material and annotations into a Markdown file in his local notes repo. Given that Ton had already implemented his feed-reader client in PHP, running in a local web server, this approach seems extremely pragmatic.
My process
My personal information environment is somewhat different at the implementation layer (although broadly similar in terms of information flows).
At the moment I don’t feel that using a commercial feed reader (Feedly) is the biggest pain point in my process. I value the portability of a cloud solution, and in particular the availability of a mobile app - most of my initial feed skimming is done on my phone in odd moments, and I use IFTTT to post saved items into my Diigo library.
At the point I grab stuff into Diigo (either from Feedly or during normal web searches) I may have multiple outcomes in mind, broadly in these groups:
- something I’ve seen in a feed that I want to do “something” with later, in other words one of the other reasons in this list
- just a quick save of something I have opened (usually on my phone) and will want to come back to “at some point”, giving me portability of the bookmark across devices and allowing me to clear down open tabs
- whilst engaged on a work problem I will almost always end up with a dozen tabs open (different pieces of documentation, background reading, or examples of how someone has tackled the problem before) - I usually tag these into Diigo so I can come back to them easily if I have to pick the problem up again, and don’t want to search down a list of Google results to find them
- something I think warrants further inspection that I want to grab into my reference library, possibly with initial annotations added in Diigo.
Of these, only the last really merits a permanent home in my notes, although of course information can move between categories - for example something I refer to while solving a problem might warrant deeper review once the immediate need has passed.
Technical implementation
I’m behind Ton on this, still thinking through what approach to use, and indeed this post is part of that process.
I think it’s worth splitting the problem into:
- sources
- destinations
- processing
Sources
Whatever I build has to pull information from two sources:
- Diigo (potentially only bookmarks with a specific tag)
- Hypothes.is (I’m only just starting to play with this, but if I can’t see how to process what I might capture with the tool, there is no point in starting down this track)
Both of these services have APIs.
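To make this concrete, here is a minimal sketch (in TypeScript, since any VS Code work would end up there anyway) of pulling tagged annotations from the Hypothes.is search API. The endpoint and Bearer-token auth are as documented; the tag and the fields I keep are my assumptions about what I’d actually want. Diigo’s bookmarks API should be callable in much the same way, with an API key rather than a token.

```typescript
// Sketch only: fetch my annotations carrying a given tag from Hypothes.is.
// Assumes a personal API token in HYPOTHESIS_TOKEN and Node 18+ for fetch.
const HYPOTHESIS_TOKEN = process.env.HYPOTHESIS_TOKEN ?? "";

interface HypothesisRow {
  uri: string;     // the annotated page
  text: string;    // my annotation body
  tags: string[];
}

async function fetchTaggedAnnotations(tag: string): Promise<HypothesisRow[]> {
  const url = `https://api.hypothes.is/api/search?tag=${encodeURIComponent(tag)}&limit=50`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${HYPOTHESIS_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Hypothes.is API returned ${res.status}`);
  const body = (await res.json()) as { rows: HypothesisRow[] };
  return body.rows;
}
```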
Destinations
All of my personal knowledge library is held and processed as Markdown files (website, public notes, private notes), so adding to the library is a matter of either saving files into the local copies of those repositories, or posting them to the relevant GitHub repo.
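For the post-to-GitHub route, the contents API is probably enough for a first pass. A minimal sketch, assuming a token with write access; the repo name, note path and commit message are placeholders, and updating an existing file would additionally need the current blob’s sha.

```typescript
// Sketch only: create a new Markdown file in a repo via the GitHub contents API
// (PUT /repos/{owner}/{repo}/contents/{path}). Assumes GITHUB_TOKEN has write access.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN ?? "";

async function pushNote(repo: string, path: string, markdown: string): Promise<void> {
  const res = await fetch(`https://api.github.com/repos/${repo}/contents/${path}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({
      message: `Add note: ${path}`,
      content: Buffer.from(markdown, "utf-8").toString("base64"),
    }),
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
}

// e.g. pushNote("myuser/notes", "inbox/some-article.md", noteMarkdown);
```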
Processing
My initial thought was to implement a fairly bare-bones script that would pull bookmarks from Diigo, complete with annotations, and then save into a rough form in my library for further manual editing.
This would have the advantages of simple implementation and the potential for easy automation, for example by running it in a GitHub Action triggered on a schedule. The obvious downside is that in practice it would fill up my notes inbox with material that had not yet received any mental processing, in effect a local duplication of information at the same state as held in Diigo.
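Even so, it is worth pinning down what that bare-bones script would produce per bookmark. A sketch: the Bookmark shape is a guess at what I would map the Diigo response into (not its actual format), and the front-matter fields simply mirror the sort of metadata my notes already carry.

```typescript
// Sketch only: render a captured bookmark plus annotations as a rough
// Markdown note with YAML front matter, ready for later manual editing.
interface Bookmark {
  title: string;
  url: string;
  tags: string[];
  annotations: string[]; // highlights and comments captured upstream
}

function bookmarkToNote(b: Bookmark): string {
  const frontMatter = [
    "---",
    `title: "${b.title}"`,
    `source: ${b.url}`,
    `tags: [${b.tags.join(", ")}]`,
    `captured: ${new Date().toISOString().slice(0, 10)}`,
    "---",
  ].join("\n");
  const body = b.annotations.map((a) => `> ${a}`).join("\n\n");
  return `${frontMatter}\n\n${body}\n`;
}
```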
I’m well aware from previous reflection that sense-making is definitely the weak link in how I practice Harold Jarche’s Seek Sense Share approach, and sense-making requires mental input.
Although it will be more work, I think the right approach mirrors Ton’s: only bring items into my core library as an intentional act, and with some initial processing.
That implies a user interface for selecting items (possibly from a pre-filtered list drawn from an API), integrating new commentary with clips and annotations that may come from the upstream tool, and selecting one or more destinations.
As my primary text editing tool is VS Code combined with the Foam plugin, there is an attraction in using that to host this curation interface.
I’ve never written a VS Code extension, but I have looked at a couple and the code structure is reasonably clear. They use TypeScript, with which I have some level of knowledge, and the main challenge will be to understand the VS Code extension API.
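To give a flavour of what that curation command might look like, here is a minimal sketch against the extension API. registerCommand, showQuickPick and workspace.fs are the real API; the command name, the "inbox" folder and the fetchCapturedItems() helper (which would wrap the Diigo / Hypothes.is calls sketched earlier) are invented for illustration.

```typescript
// Sketch only: a command that lists captured items in a quick-pick and writes
// the selected ones into the workspace as rough Markdown notes.
import * as vscode from "vscode";

interface CapturedItem {
  title: string;
  url: string;
  note: string;
}

// Assumed helper wrapping the Diigo / Hypothes.is API calls sketched earlier.
declare function fetchCapturedItems(): Promise<CapturedItem[]>;

export function activate(context: vscode.ExtensionContext): void {
  const cmd = vscode.commands.registerCommand("pkm.curateCaptures", async () => {
    const items = await fetchCapturedItems();
    const picked = await vscode.window.showQuickPick(
      items.map((i) => ({ label: i.title, description: i.url, item: i })),
      { canPickMany: true, placeHolder: "Items to bring into the notes library" }
    );
    if (!picked || picked.length === 0) return;

    const root = vscode.workspace.workspaceFolders?.[0]?.uri;
    if (!root) return;

    const inbox = vscode.Uri.joinPath(root, "inbox");
    await vscode.workspace.fs.createDirectory(inbox);

    for (const p of picked) {
      const note = `# ${p.item.title}\n\n<${p.item.url}>\n\n${p.item.note}\n`;
      const fileName = p.item.title.replace(/[^a-z0-9]+/gi, "-").toLowerCase() + ".md";
      await vscode.workspace.fs.writeFile(
        vscode.Uri.joinPath(inbox, fileName),
        Buffer.from(note, "utf-8")
      );
    }
  });
  context.subscriptions.push(cmd);
}
```

A natural next step would be prompting for my own commentary per item (via vscode.window.showInputBox) before the note is written, which is where the actual sense-making would happen.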
Anyway…
(As an aside: when writing a reply-to post, should one use the second person (“you”) or the third person (e.g. “Ton”) to refer to the author of the original post?)