In my last post, I talked about why I believe AT Protocol represents the future of scholarly communication. That's a big claim, so in this post, I want to unpack it all.

The problems in academic publishing are well-documented: proprietary platforms lock in content and communities, researchers lose control of their work, fragmented systems can't talk to each other, and opaque processes centralise power with a handful of gatekeepers.

Authors, editors, reviewers, funders, librarians, and institutions alike have been living with these issues for decades, but the fundamental ecosystem architecture has mostly remained unchanged. That situation is unsustainable and, frankly, with all the global issues we face, it's unacceptable.

I believe that AT Protocol offers a set of feasible and viable technical enablers that can and will reshape how research is created, shared, verified, and preserved.

Verifiable Identities

AT Protocol uses decentralised identifiers (DIDs) to create persistent, portable, verifiable identities that users fully own and control.

My DID is did:plc:s2rczyxit2v5vzedxqs326ri, and it connects to a human-readable alias (handle), @renderg.host, which I personalised, so that people can connect with me and recognise me everywhere on the network.

Connect with me on Bluesky →

Unlike ORCID iDs and institutional email accounts, DIDs aren’t tied to any single organisation or platform. This means that reputation and attribution travel with the researcher throughout their career and remain valid regardless of affiliation or organisational change.

When someone changes institutions or a platform disappears, their identity—and their record—stays intact. Editors can verify contributors directly. Funders can trace outputs without juggling fragmented systems.
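To make this concrete, here is a minimal sketch of how a did:plc identity is resolved in practice. It assumes the public plc.directory registry that Bluesky operates, which returns a DID document listing the current handle alias and hosting server; the helper itself is illustrative, not part of any official SDK.

```python
import re

PLC_DIRECTORY = "https://plc.directory"  # public registry for did:plc identifiers

def plc_resolution_url(did: str) -> str:
    """Build the URL whose JSON response is the DID document for a did:plc identifier."""
    # did:plc identifiers are 24 lowercase base32 characters after the method prefix
    if not re.fullmatch(r"did:plc:[a-z2-7]{24}", did):
        raise ValueError(f"not a did:plc identifier: {did!r}")
    return f"{PLC_DIRECTORY}/{did}"

# Fetching this URL returns the DID document: the handle alias and the
# PDS endpoint currently hosting the repository. If I move hosts or change
# my handle, the document updates but the DID itself never changes.
url = plc_resolution_url("did:plc:s2rczyxit2v5vzedxqs326ri")
```

Because resolution goes through the DID rather than the host, anyone verifying a contributor checks the same stable identifier no matter which platform they start from.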

Portable Personal Data

In AT Protocol, all your content, activity and connections live in a data repository that you control. A personal data server (PDS) hosts your data and connects you to the network, but the data itself is portable. This means you can move between hosts without losing your data, connections, or history.

Browse my personal data on Taproot →

Institutions can run PDS hosting services for their communities while researchers retain full ownership and portability. When a host changes its systems, alters its terms, or shuts down, you simply move your repository to a different host.

Users may rely on institutional or third-party hosts, but unlike today's platforms, you are guaranteed the ability to leave. Your repository belongs to you, so moving is not a negotiation.

For researchers, this means a complete authenticated scholarly record—publications, reviews, datasets, connections, and more—exists as a unified self-owned asset rather than fragments scattered across platforms. Librarians can ensure long-term preservation without depending on commercial platforms. Editors and reviewers can maintain complete contribution histories independent of any journal or publisher.
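Repositories are readable through a standard XRPC interface, so the same query works against any host. The sketch below builds a `com.atproto.repo.listRecords` query, a real endpoint every PDS exposes; the host name is just an example, and only that host name would change after a migration.

```python
from urllib.parse import urlencode

def list_records_url(pds_host: str, did: str, collection: str, limit: int = 50) -> str:
    """Build the XRPC query that pages through one collection in a user's repository."""
    params = urlencode({"repo": did, "collection": collection, "limit": limit})
    return f"https://{pds_host}/xrpc/com.atproto.repo.listRecords?{params}"

# Any client can read a public repository directly from whichever PDS hosts it.
# The DID and the records stay the same if the author later moves hosts.
url = list_records_url(
    "bsky.social",  # example host; after migration, only this argument changes
    "did:plc:s2rczyxit2v5vzedxqs326ri",
    "app.bsky.feed.post",
)
```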

Open Shared Lexicons

The AT Protocol defines open shared schemas (lexicons) that describe how content is structured and exchanged, such as Bluesky posts and likes. These could also cover articles, preprints, datasets, reviews, citations, and other forms of scholarly output.

Browse hypothetical lexicons for scholarly communication →

This means real interoperability across platforms and tools is possible. No more proprietary formats or closed ecosystems.

—

📚 Shared book lexicons mean my reading list on BookHive also magically appears on my Popfeed profile.

—

A preprint server, institutional repository, and journal submission system could all work with the same article format. Datasets link to publications through standardised relationships. Citation indexes become queryable across platforms. Funders can track outcomes by reporting on consistent metadata. Research integrity investigations can follow documented trails across the entire network—no more chasing down proprietary exports or scraping publisher websites.

Currently, each platform uses its own formats and APIs, creating massive friction and lock-in that benefits only vendors and their expensive, broken, and outdated systems.

Shared lexicons eliminate this fragmentation, letting the community define standard formats and build universal tools and services that can work everywhere. Effective schemas require agreement between researchers, publishers, institutions, and funders. Early decisions matter, and the community needs governance and cooperation to develop standards that serve everyone.
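To show what a shared scholarly schema could look like, here is a hypothetical lexicon for a preprint record, with a minimal check against its required fields. The `org.example.scholar.preprint` identifier and every field in it are invented for illustration; they are not an existing standard, only the kind of schema the community would need to agree on.

```python
# Hypothetical lexicon for a preprint record -- illustrative only, not a real NSID.
PREPRINT_LEXICON = {
    "lexicon": 1,
    "id": "org.example.scholar.preprint",
    "defs": {
        "main": {
            "type": "record",
            "record": {
                "type": "object",
                "required": ["title", "abstract", "createdAt"],
                "properties": {
                    "title": {"type": "string", "maxLength": 300},
                    "abstract": {"type": "string"},
                    "doi": {"type": "string"},
                    "createdAt": {"type": "string", "format": "datetime"},
                },
            },
        }
    },
}

def missing_fields(record: dict) -> list[str]:
    """Return the required fields a candidate record lacks, per the schema above."""
    schema = PREPRINT_LEXICON["defs"]["main"]["record"]
    return [field for field in schema["required"] if field not in record]

missing = missing_fields({"title": "On Portable Science"})
# → ['abstract', 'createdAt']
```

Because every preprint server, repository, and submission system would validate against the same published schema, a record created in one tool is readable in all of them.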

Labellers

Labellers* are independent services that can apply metadata (labels) as signals to content and users on the network. They could be used to mark peer-review status, flag integrity issues, classify topics, verify credentials, or indicate quality—whatever signals a community finds valuable.

Browse some existing labellers →

This creates distributed quality control without central gatekeepers.

A scholarly society could run a labeller certifying domain expertise. A research institution could flag ethical compliance or replication status. Funders could tag grant-supported work. Preprint servers could mark editorial assessments.

Multiple labellers can evaluate the same content from different perspectives—methodological rigour, epistemic status, reproducibility, clinical relevance—giving readers layered, transparent signals rather than a single journal's brand as a proxy for quality and integrity.

Currently, this signalling is almost entirely tied to where something is published. Journal prestige stands in for peer review quality, editorial standards, and research integrity—but these are opaque, inconsistent, and controlled by publishers.

Labellers make quality signals explicit, verifiable, and competitive. Researchers and readers alike can see who evaluated what and why, then decide which signals they trust.
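A label is a small, attributable piece of metadata. The sketch below follows the shape of the `com.atproto.label.defs#label` schema (`src` is the labeller's DID, `uri` the labelled subject, `val` a short label value, `cts` the creation timestamp); the DIDs, record path, and label values here are hypothetical, and the aggregation helper is mine, not a protocol feature.

```python
# Labels applied to one hypothetical preprint by two independent labellers.
# All identifiers below are invented for illustration.
labels = [
    {
        "src": "did:plc:examplesociety0000000000",   # a scholarly society's labeller
        "uri": "at://did:plc:exampleauthor00000000xy/org.example.scholar.preprint/3k2a",
        "val": "peer-reviewed",
        "cts": "2025-01-15T09:30:00Z",
    },
    {
        "src": "did:plc:examplefunder00000000000",   # a funder's labeller
        "uri": "at://did:plc:exampleauthor00000000xy/org.example.scholar.preprint/3k2a",
        "val": "grant-supported",
        "cts": "2025-02-02T14:00:00Z",
    },
]

def signals_for(labels: list[dict], uri: str) -> set[str]:
    """Aggregate the label values that different labellers applied to one subject."""
    return {label["val"] for label in labels if label["uri"] == uri}

signals = signals_for(labels, labels[0]["uri"])
# → {'peer-reviewed', 'grant-supported'}
```

Each label carries its source, so a reader can subscribe to the labellers they trust and ignore the rest, which is exactly the layered signalling described above.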

Feeds

Feeds* are algorithmic filters that anyone can build and subscribe to.

Instead of a single platform algorithm deciding what you see, you can choose from multiple feeds with transparent, customisable logic.

Browse my custom feeds for Calls to Submit, Latest DOIs, and Trending Academia → or Build your own Feed →

Currently, discovery is controlled by platform algorithms optimising for engagement, journal email alerts with limited filtering, or manual searching across fragmented databases.

Feeds make discovery flexible, personalised, and shareable—communities can build specialised filters and experiment with discovery mechanisms that reflect how research actually works, rather than adapting to generic platform features often biased towards monetised engagement.

  • Researchers can subscribe to feeds surfacing new work in their subfield, papers citing their work, outputs from collaborators, calls for submission or talks at upcoming conferences

  • Librarians can curate feeds highlighting institutional research or open datasets

  • Funders can monitor grant-supported outputs

  • Research groups can share feeds of essential reading or emerging methods or protocols

  • Publishers can identify preprints as candidates for peer review

Imagine discipline-specific feeds, citation-based recommendations, methodology filters, replication status trackers, epistemic categorisation, and more. The potential is limitless.

*Although feeds and labellers are not formally part of the core AT Protocol specification, Bluesky's implementations demonstrate their potential.

Join the ATProto Science Community

ATProto Science
ATProto Science is about using ATProto for science applications, such as publishing, curation, data sharing or social discourse around research.
https://discourse.atprotocol.community/c/atproto-science/22/none