Obsidian I/O solutions
Introduction
Working with markdown documents, Obsidian is an amazing runtime, and while I make sure everything can be done without it, it is often the most straightforward way to do things.
So how can we use Obsidian without being in Obsidian?
Remote execution
Let's look at a few ways to do this.
Most of them will involve the Advanced URI plugin.
This plugin allows us to extend the base URI scheme of Obsidian to do a lot more.
The URIs can be consumed both from within and outside of Obsidian.
While a lot of commands are meant to bring the user to the right place in Obsidian:
xdg-open "obsidian://advanced-uri?vault=myVault&filepath=myUUID&heading=SomeHeader"
some can be used for silent, remote execution in the background.
Built-in command
First, the most obvious approach is to use the commandid parameter:
xdg-open "obsidian://advanced-uri?vault=myVault&commandid=myCommandId"
We can then register a Templater
or ideally a QuickAdd
macro as a Command, get its ID and remotely run it this way.
Macro
This requires registering both a new macro and a new command, then getting its ID; not the most straightforward workflow.
The following JavaScript snippet executes a QuickAdd macro without it needing to be a registered command:
quickAdd.executeChoice("base");
Using the eval
argument of the Advanced URI scheme, we can send the signal to remotely execute a macro without polluting our command palette with dozens of "silent" commands.
But there are far more interesting things to do with the eval
argument.
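As a sketch, assuming the QuickAdd plugin registers itself under the id quickadd and exposes executeChoice on its api object (both are assumptions, not verified API surface), the trigger reduces to a one-liner:

```shell
# Hedged sketch: plugin id "quickadd", its api object and the
# choice name "base" are assumptions here.
uri="obsidian://advanced-uri?vault=myVault&eval=app.plugins.plugins.quickadd.api.executeChoice('base')"
# Send it (skipped when xdg-open is unavailable):
if command -v xdg-open >/dev/null; then xdg-open "$uri"; fi
```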
Evaluating arbitrary code
Unfortunately this still isn't a fully programmatic approach, as it requires us to register a new macro with the plugin first.
Let's try something else.
Instead of writing macros in JavaScript inside Obsidian and remotely triggering that code, why not write the macros outside of Obsidian and send them as is?
Of course, sending longer scripts across a URI is going to be an encoding minefield, so let's take some precautions first.
We could also consider using some Javascript specific tool to minify further first.
# Assuming your JavaScript is in a file named script.js
base64_script=$(base64 -w 0 script.js)
uri_encoded_script=$(printf '%s' "$base64_script" | jq -sRr @uri)
uri="obsidian://advanced-uri?vault=YourVaultName&eval=eval(atob(decodeURIComponent('$uri_encoded_script')))"
xdg-open "$uri"
This will take the content of script.js, encode it and remotely execute it with AdvancedURI.
To go further, we could create a simple script that takes 2 arguments, a vault name and a script location.
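A minimal sketch of such a wrapper, assuming base64 and jq are available (the function name is hypothetical):

```shell
# Hedged sketch: take a vault name and a script path, encode the
# script, and build the corresponding Advanced URI.
obsidian_eval_uri() {
  vault="$1"
  script="$2"
  b64=$(base64 -w 0 "$script")
  enc=$(printf '%s' "$b64" | jq -sRr @uri)
  printf "obsidian://advanced-uri?vault=%s&eval=eval(atob(decodeURIComponent('%s')))" "$vault" "$enc"
}

# Usage: xdg-open "$(obsidian_eval_uri myVault script.js)"
```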
Unfortunately, this will probably hit some limitations as well eventually.
As Obsidian and/or bash reach their limits on URI or command length, I am not expecting this solution to work forever.
Send a file to be evaluated
Breaking away from these limitations, why not keep the longer JS code on disk, inside or outside the vault? Then the only signal we need to send Obsidian is the filename:
let fileContent = await app.vault.adapter.read(filePath);
eval(fileContent);
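On the sending side, that signal could look something like this (the vault name and vault-relative path are placeholders; chaining with then sidesteps the need for a top-level await):

```shell
# Hedged sketch: only the path travels through the URI; the code stays on disk.
script_path="scripts/macro.js"   # placeholder, vault-relative
uri="obsidian://advanced-uri?vault=myVault&eval=app.vault.adapter.read('$script_path').then(eval)"
if command -v xdg-open >/dev/null; then xdg-open "$uri"; fi
```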
Although there are a few rough edges with the way Electron/Obsidian seems to handle async in the context of Advanced URI's sub-processes, this leaves two issues:
- It is hard to build an abstraction where macros and functions can be layered atop each other.
- We are only sending signals, we're not getting return values.
Let's focus on the second one.
Chasing a return value
Disk caching
The first obvious solution would be to write to disk at an expected location, and have the external process that consumed the URI wait for a file to be created or updated at that location.
We can trigger the following functions through any of the previous methods we covered to get a JSON object out of a Dataview query.
async function getInboxPagesData() {
let t = dv.pages("#inbox");
return "function" != typeof t[Symbol.iterator]
? (console.error("Dataview query did not return an iterable object"), [])
: Array.from(t).map((t) => {
let e = Object.fromEntries(Object.entries(t.file.frontmatter || {}));
return (e.filename = t.file.name), e;
});
}
async function writeJsonToVaultFile(t, e) {
try {
let a = JSON.stringify(t, null, 2),
i = app.vault,
r = await i.adapter.exists(e);
r
? (await i.adapter.write(e, a),
console.log(`File ${e} updated successfully.`))
: (await i.create(e, a), console.log(`File ${e} created successfully.`));
} catch (n) {
console.error("Error writing file:", n);
}
}
async function main() {
try {
let t = await getInboxPagesData();
if (0 === t.length) {
console.log(
"No pages with #inbox tag found or unable to process the query result.",
);
return;
}
let e = "inbox_pages_data.json";
await writeJsonToVaultFile(t, e),
console.log(
`Data from ${t.length} pages with #inbox tag has been written to ${e}`,
);
} catch (a) {
console.error("Error in main function:", a);
}
}
main();
Then with an external script, watch the file:
# Path of the file written by the Obsidian-side script (adjust to your vault)
FILE_TO_WATCH="$HOME/vault/inbox_pages_data.json"
# Wait for the file to be created or modified
echo "Waiting for the file to be created or modified: $FILE_TO_WATCH"
inotifywait -m -e create,modify --format '%w%f' "$(dirname "$FILE_TO_WATCH")" | while read -r FILE
do
if [ "$FILE" == "$FILE_TO_WATCH" ]; then
echo "File updated or created: $FILE"
break
fi
done
or in Python:
import time
from pathlib import Path

file_to_watch = "inbox_pages_data.json"  # the file written by the Obsidian-side script
file_path = Path(file_to_watch)
while True:
    if file_path.exists() and file_path.stat().st_mtime > time.time() - 1:
        print(f"File updated or created: {file_to_watch}")
        break
    time.sleep(1)
We could push this even further by building a queue on top of the filesystem, where each task is, for instance, a new timestamped file, or a new line in a file.
Alternatively, a script could run in a loop, watching for file changes and updating a file on disk, which something like the snippets above could then read.
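A minimal sketch of the enqueue side, assuming a one-file-per-task layout (the queue directory and task name are placeholders); naming files by a nanosecond timestamp makes tasks sort in arrival order:

```shell
# Hedged sketch: enqueue one task as one timestamped file.
queue_dir="${TMPDIR:-/tmp}/obsidian-queue"   # assumed location
mkdir -p "$queue_dir"
task_file="$queue_dir/$(date +%s%N).task"
printf '%s\n' "export-inbox" > "$task_file"  # placeholder task name
ls "$queue_dir"   # oldest task first when sorted lexically
```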
Is it time to REST?
But maybe this is pushing the limits of what URIs are meant for.
Maybe there are better methods for this back and forth.
REST comes to mind. But that means auth, networking, etc.
It seems about as cumbersome.
Socat
What about IPC sockets?
It's fast, secure and local.
Let's explore it.