Reflection on a few decades
By R. S. Doiel, 2025-12-31
I have been thinking about my career recently. It extends into the prior century. I have also been thinking a lot about the Internet and the evolution of the Web. I see a pretty consistent thread in my career: I write software that transforms content and information from one system into another. Sometimes it's structured data and sometimes it's unstructured text. The thread became clearer with the arrival of the World Wide Web in the early 1990s. While the systems used over that time have grown complex, I think they can be made far simpler. I think they can be better implemented for writers. They can be writer centric. Finding a path to something simple has preoccupied my nonworking hours for much of 2025.
A lesson from my past
Some of the earliest programs I wrote for the Web were written in a language called Perl. My University had a "mail form handler" service. That service would take a web form submission and send the results to your email account. I wrote a simple mail filter that would extract the web form content from the email and append the results to a CSV file. I had another Perl script that would read in the CSV file and generate a set of HTML calendar pages in the directory I used for a personal website. It worked pretty well. It was so trivial I didn't keep a copy of it or port it to a new language when I stopped writing software in Perl all those years ago.
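The original Perl is long gone, but the pipeline is simple enough to sketch. Here is a minimal Python version of the same idea: pull "name: value" pairs out of a form-handler email, append them to a CSV, and render an HTML page from the accumulated rows. The field names ("date", "title") and the email layout are my assumptions for illustration, not the original code.

```python
import csv
import html
import io

def form_email_to_row(body: str) -> dict:
    """Extract 'name: value' pairs from a mail form handler message."""
    row = {}
    for line in body.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            row[key.strip().lower()] = value.strip()
    return row

def append_to_csv(row: dict, fieldnames: list, buffer) -> None:
    """Append one submission to a CSV file-like object."""
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writerow({k: row.get(k, "") for k in fieldnames})

def render_calendar_html(rows: list) -> str:
    """Render accumulated submissions as a date-sorted HTML list."""
    items = "\n".join(
        f"<li>{html.escape(r['date'])}: {html.escape(r['title'])}</li>"
        for r in sorted(rows, key=lambda r: r["date"])
    )
    return f"<ul>\n{items}\n</ul>"

# Simulate one form submission arriving by email.
body = "date: 1995-10-31\ntitle: Halloween party"
row = form_email_to_row(body)
buf = io.StringIO()
append_to_csv(row, ["date", "title"], buf)
print(render_calendar_html([row]))
```

Each stage reads one plain format and writes another, which is the composability lesson the rest of this post keeps returning to.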
This little calendar project led to career opportunities. I didn't find that out until much later. A colleague mentioned I was considered a hacker (in the good sense of the word) by some of the system admins because I had done that. It was just one of many experiments I've done over my years working with computers and the Web.
That personal project taught me several things. I've kept them in mind over the years.
- Composable systems are powerful, Unix combined with the Web is composable
- Structured data aids in composability (email and CSV are only two examples)
- Systems get used in unexpected ways
The third point has proved important. The web can serve both an audience of one and the general public. You can also split the authoring system from the presentation system. You can divvy those up between machines. It all depends on the project's scope and rates of data change, combined with the constraints of scale.
When I first started maintaining and building web publishing systems I almost always leveraged a database management system too. This was particularly true when LAMP emerged as the common platform. Today that isn't the case. I avoid relying on database management systems unless there is a clear requirement for them. Most of the time there isn't.
Databases were convenient because they make working with structured data easy. Database records, even NoSQL database records, have a structure. An unstructured document can be worked into that structure using indirection. A path can point to the unstructured thing. Links are a form of path. A link provides a structured reference to an unstructured object. You can often simply encapsulate the unstructured content as an element in a structured record. This happens in RSS feeds. Structure allowed us to use a database to enforce record locking behaviors. That was needed to avoid write collisions. Originally databases made it easier to write web applications. That isn't necessarily true today.
Eventually the bespoke web publishing systems became content management systems. Bespoke gave way to more standardized software like Movable Type, Zope, WordPress and Drupal. Those arrived in what I think of as the second Internet age, the age of commercialization.
The Commercialization Era
When the Internet and Web went from an academic and research platform to a commercial platform many things changed. The Internet and Web came to be consumed by monetization, business models and greed. Those enabled the surveillance economy we have today.
With these new drivers, new demands came to the forefront. One big side effect was that the Internet grew very large very quickly. By the time the first iPhone was announced the Web was already experiencing growing pains. One of the big issues was spikes in Web traffic.
In the early days we assumed a few dozen concurrent connections. By 2000 you needed to handle much more. By 2005 you worried about tens of thousands of concurrent connections. The problem was even large enough to be named: the "C10K problem". It was talked about by web people in the same way the old aerospace engineers used to talk about the sound barrier.
Money was on the line so people tried various solutions. Some were hardware approaches, some were content approaches, and others changed how we implemented web services generally. Eventually general purpose web servers like Nginx and Apache 2 easily handled 10K concurrent requests. The next worry was 100K, then 1,000K, and so on. That came to be called "Google scale". It probably should have been called global scale. Global scale is one of the reasons that distributed systems came to dominate the commercial web. That is true in terms of practical implementation, though those distributed services were mostly used to build monolithic systems. Monoliths are easier to monetize. Monoliths and scaling up are only one possible direction to push.
What hasn't been acknowledged since the 10K problem era is that we may be approaching peak human use. Think about it. The web has become global since 2000. Most people on the planet experience the web today either through embedded systems (appliances) or their phones. Cell phone saturation has been a reality for manufacturers for a decade in many countries. I suspect the continued rapid growth in traffic isn't direct human interaction anymore. It's software systems.
From my own work experience, the growth in traffic to our websites (I work at a research library) has been bots. It is not human users. Not by a long shot. Bots are software systems. Early on bots tended to be useful. They were dominated by web crawlers. Web crawlers either mirrored sites (e.g. for preservation) or gathered what was needed to create search indexes. While they could be unruly, norms and expectations developed that made it possible to respond to the bots without delaying responses to real humans. The bots were largely useful. Since 2024 the problem has been AI bot swarms. These bots were initially used to harvest web content to train large language models. In practice they acted as denial-of-service engines. The continued goal appears to be denying competing bots and humans access to websites. The big monoliths have to protect their investments in capturing attention, after all.
Scaling up has led to more bots. Maybe we should be thinking about scaling down instead of up. Maybe we reach the global by focusing on the local.
Observations on website construction
Since at least 2005 there has been a gathering renaissance of static websites. There have been many drivers. Static sites are often easier to scale for high volume traffic. They are easier to secure against attacks and to fix if defaced. Often they are much cheaper to run and maintain.
Several innovations helped this happen.
- Template systems are easy to build, I've seen lots of these come and go
- Lightweight markup adoption, like Markdown, provides a source for generating HTML that is easy for writers
- Cheap to rent static storage systems became commercially available (S3)
- The web browser went from a green screen surrogate to a full on rich client. Web browsers even work on phones now
Observations on computer systems
In 2025 commercially available computer systems still use a Von Neumann computer architecture: a CPU, memory (RAM) and storage. That model also assumes inputs and outputs that are distinct (people forget this). The scaling problem observed in the commercial era often presumes the inputs (writes) and outputs (reads) occur at the same frequency. In practice the rates of reads and writes diverge. Data is consumed, read, more often than it is written. That's true for us humans as well as for software. That is why it is important when designing web systems to consider the rates of data change just as we make choices constrained by scale. It is really important to challenge our assumptions about rates of data change and whether those rates are constant.
My experience is that rates of change also change. That is true both in the software implementation and in the data. It is important in guiding how we approach the lessons of scaling. Curation requires more dynamic responses, while stable consumption is simply a matter of presenting the current state. Over time the last current state tends to stabilize for many communication use cases.
The needs of the monetized Web have delivered an outsized concern with scaling up. This in part has been a result of over-reliance on dynamic content web systems. This is particularly true if the goal is to create and encourage monoliths like the walled gardens most people use on the Web today. You can scale writing separately from reading. In fact that leads to more options in scaling. Unfortunately most developers and many engineers fail to see that distinction. We keep building complex monolithic systems even when we compose the monoliths from distributed systems.
Scaling up appears to be the goal, a single direction of scaling. I believe this is a false lesson. You can scale up as well as down. Scaling down can be an advantage.
Are we entering a new web era yet?
Long before "Web 2.0" there was already cyclic hype about the next web or the web's replacement. Most of the time this is driven by some sort of technological change, buzz or approach. It tended towards peak hype as it was monetized. There is a good share of doom and gloom alongside Utopian thinking in these hype cycles. The hype cycle rarely matches the actual changes, and the impacts aren't always felt when promised. At some point we will enter a new era. The current status quo is very problematic, so how do we get to the new one?
My bet, Simplification
Why simplification? Why is this both desirable and possible? How does that lead to a new era of the Internet and Web? The Internet and Web were built as distributed systems. Those specific technologies are largely intact. Commercial interests may not align with distributed systems, but globally a peer to peer structure scales. That is how we've implemented the global Web. On the other hand, a lesson from the late 20th century Web is that unregulated markets, unregulated services, tend to collapse towards monopolies. Sometimes they just collapse and cease to exist. Centralization has some advantages if the goal is wealth accumulation. Yet centralization makes systems brittle and precarious. In 1990 there were no regular global headlines about Internet outages. In the 2020s this has become common. Eventually decentralization will be embraced again as our Web habits evolve. Given the underpinning technology of the Internet and Web remains distributable, I don't think reinvention is a requirement to usher in the next era. The easier path already exists.
The next web isn't likely to be based on blockchain, Bitcoin or large language models either. Why? None of these really enhance communication. The Internet and the Web are communication systems. Computers, unlike their calculator predecessors, are communication devices. That telephone in your pocket is a computer after all. Making it easier to communicate is likely to be the source of the next web evolution. How do we make it easier to communicate? We make it simpler.
Market consolidation does not lead to innovation no matter how many startups are bought by conglomerates. I think people are where the next web happens. It happens at a human scale of a few people at a time. If there is enough critical mass of people doing the "new thing" then that's when the new web will arrive. It'll have roots in the old web, may even be technologically indistinguishable, but it'll be different because people choose to use it differently and think about it differently.
I think the next web needs to be more personal, more authentic, less about algorithms. It needs to be more about human choices. General automation by autonomous agents is not necessarily needed. I think the new Web happens as a countervailing force to the current onslaught of centralization and control. I think the new Web will happen in spite of centralization and control. It will not require W3C blessing because it doesn't require new specifications and agreements. It requires new individual human interactions.
There are elements I see as factors in reasserting decentralization. There has been a computer revolution happening that has largely gone unnamed and unnoticed by mainstream media and society. That revolution is in inexpensive single board computers. Some of it has been dismissed as a fad in the form of the DIY maker movement. Before that it was buried under the commercial hype of the Internet of Things. Single board computers are real. I've been using them for years. I'm typing this post on one now. They are web friendly.
Then and now
My first computer, a clone 286 machine, cost me about $1,800.00 US. I ran DOS and Minix on it. I had to go out, buy the parts and assemble it myself. This was back in the late 1980s. A mini computer that could host a website in the early 1990s ranged between $50,000.00 and $250,000.00 US. The cost of a house at that time. Prices came down and eventually personal computers could run websites directly. The price of a web server settled down to about what I paid for my first personal computer.
Fast forward to now. For $1,800.00 US I can easily assemble four Raspberry Pi 500+ workstations, a 16 port network switch and the Ethernet cables to connect them. I can grow that network with web servers running about $150.00 to $200.00 US per server depending on the storage I buy. A single Raspberry Pi 5 with solid state disk storage is enough to host a website with concurrent users in the thousands. What that 1990s era mini computer did for $50,000.00 US can be done with a $200.00 web appliance today. That's an important change.
Here's how I came up with my $1,800.00 budget. A sixteen port Ethernet switch from Netgear runs $100.00 US when I checked Best Buy today. A Raspberry Pi 500+ runs $200.00 US plus tax and shipping. A Pi monitor runs $100.00 US. Add in cables and such and you're looking at about $350.00 per workstation. That's how I know I can build a four workstation network, with room to expand, on a $1,800.00 budget. I don't even need to assemble the computers; I just need to connect them up and power them on. Each machine can be a web host, a workstation or both. My choice. Expanding that system is also low cost. You can build useful web servers for less than the price of the Raspberry Pi 500+ and monitor.
My little four workstation network is enough hardware to support a neighborhood paper. Add a few more machines and you can provide the hardware for a small town paper. You just need the physical space to house them and, of course, the writers and graphics people to use them.
Unlike the small town newspapers of my youth, there are no printing presses to purchase, rent or maintain. There are just several computers. The computers I listed last for a decade or more. This suggests something to me. Maybe the hardware we have is good enough for the next web. Maybe hardware isn't the issue it once was.
The cost factor in the next generation of the web doesn't require huge expenditures of capital. It doesn't even require big data centers in spite of what the Big Companies will swear are required.
It suggests to me that we can scale down. We scale globally by going small. Heck, your "smart" appliance likely has an embedded web server and it doesn't pack a huge amount of memory or a fast clock cycle. Appliance manufacturers don't put any excess compute capability into appliances. That cuts into profits. The Internet of Things, the "smart" appliances, have been enabled by cheap single board computers. In the meantime the foundations of the Internet and Web remain intact and those little machines fit in just fine.
The trick to making the web truly distributed is to make it both affordable and easily editable. I am skeptical of centralization as the means of achieving that. I think we get there by embracing the little single board computers like the Raspberry Pi. We embrace individuals owning the hardware, software and the content they produce. We embrace individual control so that collectively we have a shared medium. I think the model where we rent a connection, where we rent our presence, is a broken one. What happens if we don't need to pay a toll to use the Internet? What happens when it is as convenient as the public sidewalk?
The small set of networked computers I described run the same protocols as the Internet and Web. If I connect them up to the switch I have a local private version of the Internet and Web. Let's call that an internet (common noun) and a web (common noun). What happens when I connect my web to my neighbor's network? What happens when they connect theirs to another neighbor's? When does that arrive at the proper nouns, Internet and Web? Maybe the next web is out there in the small networks interconnected.
Moving beyond commercial Internet Service Providers
When the Internet was commercialized it was done using the justification that Government couldn't afford to give the public access to the taxpayer-funded, Government-created resource. The claim was only private business could do that. Not surprising considering the political wisdom of the 1980s and 1990s. That wisdom treated all problems as business problems, and markets solve business problems. Many dubious assumptions by a whole bunch of smart people.
In practice that approach has been a failure. Try to use a cell phone in rural America today; chances are it'll fail. The Internet for most Americans is experienced through their cell phones. Heck, in many urban areas cell service remains really problematic. It is common to get poor cell service throughout most of Los Angeles County. Some of that is geography (mountains) but much is also the lack of investment by business as territory was divided up among a decreasing number of Internet Service Providers.
In response to market failures and the continued push to put everything online, people have started to pull together to find ways to cope. An interesting approach has been community owned nonprofit internet service providers. If you're a town with fewer than 1000 people you're unlikely to have many businesses offering service at affordable rates. The best you might do is a satellite connection like Starlink. Satellite access is slower because of transmission distance: the signal has a long round trip to orbit and back, while fiber takes a much shorter path. For fixed locations like farms, homes and small businesses fiber gives you faster, more reliable connections.
What has started to happen in some rural areas is communities lay their own fiber between houses or network cell towers. Then they connect these to a location that also has an Internet connection with significant bandwidth. If a railroad runs through their town then that might be used as a connection point (rail systems have used fiber optics for switch control since the last century, and they sometimes sell their excess capacity). Similarly a large antenna array can carry more data to and from a satellite than a home system like Starlink can. By pooling the communications together this becomes an affordable option. A public nonprofit service can do this.
Rural community cooperatives can get by with minimal staffing and little in the way of physical space. They function like other small businesses in terms of budget, but since they are nonprofit they don't need to inflate the cost charged to members, nor do they need to pay a competitor off to avoid competition.
A policy of encouraging lots of little cooperatives is technically doable. It is how the Internet was designed. Like public streets and highways, communities can choose to provide broadband access that connects to the Internet. Organizations like the Institute for Local Self-Reliance show examples of this approach.
What else is needed?
The act of producing a website needs to be simplified too. Ideally it'd be a turn key system I can run on my Raspberry Pi sized computers. That turns out to be incredibly easy when you think of the content system as being single user.
I noticed that about a decade and a half ago people seemed to rediscover the "static" web. This appeared to be driven by a reaction to the walled gardens, to privacy costs and the desire for local control. The static web is a bit of a misnomer. It refers to how the content is hosted, not how the content behaves. A static website in 2025 can be highly interactive.
During the first dot com era (approx. 1995 through 2000) "dynamic" websites rose in popularity. While the web browser was becoming a rich client, a lot of the dynamism happened server side. By 2004 and 2005 the rich features provided by web browsers were being leveraged in "web properties" like GMail and Google Maps. At that point the static approach became a misnomer. The web browser can run whole applications. You can approach site generation just like you did in the original web and still have a dynamic experience. By 2010 this was well understood. By 2020 it was taken for granted. I don't think the public realizes that the web they experience on their phones is produced in some part by static websites. All those "apps" that run on your phone that use webview are simply websites and services. Many are statically hosted.
When JavaScript and CSS became widely supported features in web browsers they allowed the client, the web browser, to be independently dynamic. In terms of naming, that made it much harder to explain that a "static" website doesn't imply it can't be interactive. In 2000 providing search for a website required a hosted service. Today you can run the search service inside the browser itself. Indexes are updated when you generate the static website. The indexes are partitioned and retrieved by the browser as needed. Unless the website is Wikipedia sized you can just provide search browser side. The term static site remains but it's not limited to the approaches used by the retro or nostalgic web.
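The build-time half of browser-side search can be sketched briefly: when the static site is generated, produce a small inverted index mapping terms to page paths, split into partitions the browser can fetch on demand. The partition-by-first-letter scheme and the `index-<letter>.json` file names below are illustrative assumptions, not a standard.

```python
import json
import re
from collections import defaultdict

def build_index(pages: dict) -> dict:
    """pages maps URL path -> plain text.

    Returns {partition_key: {term: [paths...]}}, partitioned by the
    term's first letter so the browser only fetches what a query needs.
    """
    partitions = defaultdict(lambda: defaultdict(list))
    for path, text in pages.items():
        for term in sorted(set(re.findall(r"[a-z]+", text.lower()))):
            partitions[term[0]][term].append(path)
    return {k: dict(v) for k, v in partitions.items()}

pages = {
    "/posts/raspberry-pi.html": "Hosting a website on a Raspberry Pi",
    "/posts/rss.html": "Syndicating posts with RSS feeds",
}
index = build_index(pages)
# Each partition would be written to its own file, e.g. index-r.json,
# fetched by the browser only when a query starts with that letter.
for letter, terms in sorted(index.items()):
    print(f"index-{letter}.json", json.dumps(terms, sort_keys=True))
```

The client-side half is just a fetch of the right partition plus a dictionary lookup, which is why this works without any hosted search service.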
The way the walled gardens are built and scaled ultimately comes out of tried and true lessons going back to the 2000s (if not earlier). That tells us something. We need a new vision more than new technology to move forward.
What happens when we separate the act of writing, the act of content curation and the delivery of the website? We have an opportunity to radically scale down after decades of scaling up. In the process I think we can discover a more human and simpler approach to sharing a human presence online.
The Open Web, the Web or a web?
Many weeks ago I read a short post by Dave Winer talking about the terms Indie Web, Open Web and the Web. I wish I had saved the link. I think he was on to something. The Web has not gone away. It is still here. The walled gardens have been very successful in dominating marketing and news cycles. Many people assumed the destination must be a walled garden because of the network effect. Maybe that is only a general perception and not a fact. In 2025 I have come to believe that the "network effect" is over hyped, maybe just effective marketing.
I can call anyone from my telephone regardless of what phone company they use. I'm not limited by the big players like AT&T or Verizon. That is true because the phone systems use common protocols to route calls and data. The Web is like that too. We need to start thinking in terms of the common protocols that work. We have a lot of them. I would go a step further and look at simpler protocols and see how far we can push those. I don't think the answer is the social graph or ActivityPub. Those are marketing tools, not communication tools. I don't believe ATProto from Bluesky is the solution either. The solution that does and has worked for me is HTTP and RSS. It's worked for me since RSS was formalized. It hasn't stopped working since, even though the marketing gravity has moved elsewhere.
What has failed to fully materialize is content systems oriented to both inbound and outbound RSS. WordPress comes close and I see that as part of Dave Winer's excitement about WordPress as the API for writing for the web. But I don't think content systems necessarily need to be web based. I certainly prefer using my text editors for writing than typing into a text box of a website even if it does implement a WYSIWYG editor option.
A step in the right direction is to empower the people who create content. It is to empower the writers. The software should allow our writing to be easily integrated into a website. Where that website winds up being hosted shouldn't be determined by that software, especially if the host is a walled garden like Substack or Medium.
Scaling down
The scaling down of the web is an under-explored topic. I think it is worth exploring. I suspect it'll be my continued focus in 2026.
I want a simple piece of software that manages the content and renders the website. It should allow me to easily compose the site from pages, posts, text blocks and feeds. I want it to be lightweight, and specifically I need it to work well on my Raspberry Pi 500+. Preferably it should work equally well on a Raspberry Pi Zero 2W. I'm looking for a small core feature set that is easily explained, easy to think about and gets the job done without a lot of fuss. I think the system can be perfectly functional as a single user system. Focusing on a single user system allows us to simplify the whole process.
Communication is a social act, so doesn't that mean we need a multi-user collaborative system? A single user system can be collaborative. Two pieces of tech enable a single user system to be used collaboratively. The first that comes to mind is RSS 2.0 feeds. These let you move Markdown between content systems that support inbound RSS easily. The transported Markdown can be repackaged and republished on another site. The second is distributed version control systems. Git happens to be the currently popular one. If I work on my copy and you work on yours, we collaborate when we merge the differences between our two copies. We do this in software development all the time. It amazes me that this feature is rarely available to writers outside the web.
In terms of inbound feeds I would choose to read RSS, Atom and JSON feeds as a minimum but produce only RSS 2.0 feeds. Why? Because a lesson learned from the development of the Internet is that adoption is enhanced by being liberal in what you accept as input but strict in what you produce as output. RSS 2.0 is extensible. It has a track record of successful extension. Just look at podcasting. Podcasting was possible because RSS 2.0 had support for enclosures, which point at non-XML, non-text content. That can be an audio file, video, images, you name it. Similarly, the practice of supporting Dave Winer's source namespace means we can ship Markdown content right there in the RSS item. I don't have to send it as an enclosure. RSS can be used to safely transport Markdown content. Markdown, stripped of any embedded HTML, can be safely rendered by anyone else. On the receiving end, render the Markdown to get back standard HTML. It's just a matter of thinking differently about how we use this venerable RSS format.
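Here is a minimal sketch of an RSS 2.0 item that ships its Markdown source alongside the rendered description, built with the Python standard library. The namespace URI (`http://source.scripting.com/`) and the `source:markdown` element follow my reading of Dave Winer's source namespace; treat both as assumptions to verify against the namespace docs rather than a finished feed generator.

```python
import xml.etree.ElementTree as ET

SOURCE_NS = "http://source.scripting.com/"
ET.register_namespace("source", SOURCE_NS)

def markdown_item(title: str, link: str, markdown: str) -> ET.Element:
    """Build one RSS <item> carrying its Markdown source verbatim."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    # Rendered HTML goes in <description> as usual; the Markdown
    # travels alongside so a receiving system can re-render it safely.
    ET.SubElement(item, "description").text = "<p>Scale small.</p>"
    ET.SubElement(item, f"{{{SOURCE_NS}}}markdown").text = markdown
    return item

item = markdown_item(
    "Scaling down",
    "https://example.org/posts/scaling-down.html",
    "# Scaling down\n\nMaybe we reach the global by focusing on the local.",
)
print(ET.tostring(item, encoding="unicode"))
```

A receiving system that understands the extra element can pull the Markdown out and repost or quote it; one that doesn't simply falls back to the ordinary description, which is exactly the kind of graceful extension RSS 2.0 was designed for.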
Here's an example of applying RSS in practice. The walled garden news sites feature content written and hosted elsewhere. Google News and the not quite dead yet Yahoo News are good examples of this. Microsoft Windows even takes this approach in its screen saver notification display. How do they get the news content into their aggregated news pages? Why, they read feeds: RSS, Atom and JSON feeds. They take the feeds, normalize them and then output HTML as needed. Does this approach require a big company to pull off? No. Does it require that you read the news in a web browser? No. For years I read news in my terminal using Newsboat. It's not a web browser.
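The harvest-normalize-render loop is small enough to show in full. This toy version parses items out of RSS documents and emits an HTML fragment; the inline feed string stands in for documents a real aggregator would fetch over HTTP, and only RSS is handled here (a real normalizer would also accept Atom and JSON feeds).

```python
import html
import xml.etree.ElementTree as ET

def parse_rss_items(feed_xml: str) -> list:
    """Normalize an RSS 2.0 document into a list of {title, link} dicts."""
    root = ET.fromstring(feed_xml)
    return [
        {"title": item.findtext("title", ""), "link": item.findtext("link", "")}
        for item in root.iter("item")
    ]

def render_page(items: list) -> str:
    """Render normalized items as an HTML list, escaping feed content."""
    rows = "\n".join(
        f'<li><a href="{html.escape(i["link"])}">{html.escape(i["title"])}</a></li>'
        for i in items
    )
    return f"<ul>\n{rows}\n</ul>"

# Stand-in for a fetched feed document.
feed = """<rss version="2.0"><channel><title>Example</title>
<item><title>Hello</title><link>https://example.org/hello</link></item>
</channel></rss>"""

print(render_page(parse_rss_items(feed)))
```

Run on a schedule against a list of feed URLs, that loop plus a static file copy is essentially the whole aggregation site.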
Google doesn't hire writers to write the news stories; it repurposes the content provided by other people and organizations. The only thing that might be novel is that they filter the aggregated feeds so that you are highly likely to click a link in the results they display. It's the same approach as search engine results in terms of monetization. We can take advantage of that too.
For several years I've run a personal news aggregation site, Antenna. Currently that site is generated from close to one thousand separate feeds. It takes a few minutes to harvest, a minute or so to render as HTML and another minute to publish to the website via GitHub Pages. I run the harvest once a day and I can read at my leisure. I also run a copy on my local home network. My local network version updates every hour or so. I can't remember the last time I bothered with Google, Yahoo or Windows news. When many friends left Twitter I started following them on Mastodon because Mastodon produces RSS feeds. When people showed up at Bluesky I followed them too because Bluesky supports RSS as well. In theory I could follow something in Meta's Threads this way if I wanted, I just don't know anyone there.
The computer I use to aggregate feeds as well as stage my website is a Raspberry Pi 3B+. It runs headless (without a monitor and keyboard attached). The storage is a small SD card. With its case it is the size of a bar of soap. I think it has about a gig of RAM and a few CPU cores. It's not a big computer. It's all that I need as a web publishing platform. Scale small. I think that 3B+ cost less than $50.00 when I bought it, and that included a case and power supply too.
You don't need Google, Meta, Microsoft, Apple or Amazon. They are not necessary for the web or a web presence. At most what they sell is a convenience. I find myself increasingly questioning the cost of that convenience. Running scp on a schedule is probably easier than dealing with Git, branches and versions.
What I dream of is a feed oriented content management system I run on my inexpensive single board computer. One that I can choose to copy to the public Internet and Web or keep private on my network. I've spent much of 2025 thinking through this problem and exploring how to make it a turn key system I can share with others.
What I've been up to
In my spare time I've been reorganizing my personal websites and the tooling behind them. By August 2025 these efforts had resulted in a project I call Antenna App. The first step was to wrangle how I built my personal news site, Antenna. Currently it is hosted via GitHub Pages. It was fortuitous that I started with aggregation. It made the resulting content management system both simple and flexible. The initial versions of Antenna App provided for harvesting feeds and then publishing them as HTML pages in a static site directory. An approach similar to my calendar app all those years ago. Antenna App was inspired by the simplicity of Newsboat and its use of SQLite3 databases for content collections. After working out my aggregation approach I added support for simple blogging. Posts become items in an RSS feed; they just require an extra step to render as individual HTML pages. This left me with a feed oriented core. I can aggregate, I can render posts and output the combined feeds, as well as generate lists of feeds shared as OPML files. What was missing was support for general web pages, so I added that this past Fall. By November 2025 I was generating my personal website, a club website and my aggregated news site using Antenna App.
Command line programs are nice; they tend to be focused. There is a problem when sharing them, though. You need to learn the command line syntax and that can put people off. I want people to realize that web publishing can be as easy as typing up a blog post. In December I started working on transforming the Antenna App command line tool into an interactive tool. Getting that completed will happen in 2026. I think it will get me closer to what is needed to get the Web out of its current walled garden rut.
What I hope is next
The rising popularity of static site generators has helped Markdown become widely known and well documented. It has also come with some missed opportunities. There are lots of systems that will take Markdown content and render a website with it. That triggered an explosion of web publishing on GitHub even before Microsoft purchased it, and it continues as a common use case on GitHub to this day. The problem is that Git is complicated, and Microsoft is in the process of turning GitHub and VS Code into another walled garden with AI gatekeepers. That sucks. I think we can move beyond the GitHub use case. I think it's time to move beyond complex rendering setups and scripts.
What I think is missing are simple, turnkey, feed oriented content management systems designed to run on your own computer. That could be a "killer app", especially if they run on low cost single board computers like the Raspberry Pi. Technically I can run a local copy of WordPress on a Raspberry Pi. In practice WordPress is a hassle as well as a resource hog. Why run a multi-user system if it is only going to be used by a single person? Running WordPress means running and maintaining PHP, a web server and a MySQL database. That isn't really necessary for managing content at the single user level. I think we can do better than that. I think we can create something far simpler.
I think there need to be single user content systems that can read and write feeds. Reading feeds gives us the ability to aggregate. Writing feeds allows us to syndicate. I think inbound RSS is a reasonable means to recreate some of the useful features offered by the centralized walled gardens we call social media. If I can take an item from an inbound feed, repost it or write a post quoting it, I've got the seed needed to be social. That's the missing bit of the RSS ecosystem. I don't require Twitter or Facebook at all. I don't require Substack or Medium. I just need to host my content, publish to the public web and be clever about getting the word out. I think the concept of an inbound RSS feed is useful and worth exploring. Not everyone agrees though.
Andy Sylvester wrote a thoughtful rebuttal to Dave Winer's request for inbound RSS support in content systems like Bluesky, Threads and Mastodon. I happen to disagree with Andy, but he made a very good point. Writers haven't insisted on inbound RSS support. It is not on their radar at all. I think that's because writers haven't insisted on decent content management systems either. They've lived with cut and paste for so long they just assume that's the way it has to be. Most writers I've met treat content systems as a price of publishing their writing. These systems function at the minimum "good enough" standard. What if writers didn't have to pay that price anymore? Would they write more?
To appreciate inbound RSS people need to see an actual working system. I think you need to have a feed oriented content management system to start with. Why create such a system? Writers are also readers. Most writers I've met read as much or more than they write. Aggregating feeds is an easy way of pulling your current reading material into one place. A feed oriented content management system should make it easy to read as well as to publish. In fact it should be trivial to support both on a website side by side.
Today most web content management systems are also web based. That is just how they evolved; it was how they dealt with the challenges of divergent computer systems, operating systems and displays. Today I can write an application in Go, or in TypeScript compiled with Deno, and deliver native executables for Linux, Windows and macOS. I can deliver them for both Intel style and ARM style CPUs. That used to be really hard. It's easy now. I don't need the web to provide the interface to my content system, and going native no longer limits the computers it can run on. We can be a little retro and provide native applications that leverage the operating system's "terminal" application without a lot of the prior challenges.
Unlike Web based content systems, where you must provide an embedded text editor, a native application can focus on just managing the content and rendering your website. That makes the content system simpler. In a locally run content management system your text editor is independent. You get to use the one you prefer. In 2025, text editors are like the typewriters of the last century. Writers have their preferences. Why take that away from them? The content management system's role is taking manuscripts and turning them into a publication. A manuscript might be a few words or a novel. On the Web we call manuscripts posts. In practice they're just simple text files. The length and role of the post is up to the writer. I think the real role of a content system is to manage content and assemble the publication.
I've used many editors over the years on many different operating systems. The ones I reach for today include Gnome TextEdit and aretext. Ask the next writer and their pick will surely be different than mine. Ask me a month from now and my choices may have changed. I tend to use different editors for different tasks. I like a vi style editor for editing content and a nice modeless editor for initial drafts. But that's just me.
That's one of the reasons I don't run WordPress on my Raspberry Pi. If it were just rendering Markdown to HTML that'd be an option; Wget mirrors websites easily. But running MySQL, running Apache 2, managing PHP updates, using Wget and being forced to use an editor I don't really enjoy? That's too much. While I admire WordLand and how it expands the possibilities of WordPress, I prefer writing in my terminal based editor.
I used to run web based content systems on my local machine, but when good static website options prevailed I left them and haven't really looked back. I want a content management system to manage my writing, not to dictate which editor my current whims prefer. The content system should take care of seamlessly generating the HTML, RSS and other files needed for a functional website. That's its primary purpose, its role in the writing process. I want a content management system that doesn't require me to learn yet another template system or language. I want a system that supports integrating aggregation. I want a content management tool that just works without requiring more knowledge than Markdown and a general understanding of how the web works. I don't think I am alone in this.
Inbound RSS, an opportunity
Content management systems like Drupal and WordPress have supported RSS feeds for decades. So what's the big idea about inbound RSS? Inbound RSS is the ability to subscribe to a feed and gather the items presented for republication. Today it is easy to get RSS out of social media platforms like Mastodon and Bluesky because they provide RSS feeds of posts. But if I want to contribute on those systems and still post on my own websites I'm forced to make some choices. I can use some sort of bridge software to interact with ActivityPub or ATProto, or I can cut and paste. Both options are a failure in my opinion, especially considering how easy it is to read and process RSS feeds. WordPress has let you import RSS feeds for decades. Dave built WordLand to extend that concept so he could use his preferred editor with WordPress. Why isn't inbound RSS an option out of the box, without resorting to WordLand and WordPress, for making posts on Mastodon, Bluesky and other federated systems?
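As a sketch of what "inbound" means mechanically: take the newest item from a subscribed feed and turn it into a Markdown draft that quotes it. The sample feed and the draft format here are my own invention, assuming plain RSS 2.0 element names.

```python
import xml.etree.ElementTree as ET

# Illustrative inbound feed; a real system would fetch and poll it.
INBOUND = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>A friend's site</title>
<item><title>On feeds</title><link>https://example.org/on-feeds</link>
<description>Feeds are the quiet plumbing of the open web.</description>
</item></channel></rss>"""

def latest_item(feed_xml: str) -> dict:
    """Pick the first item out of an RSS 2.0 document."""
    item = ET.fromstring(feed_xml).find("channel/item")
    return {field: item.findtext(field, "")
            for field in ("title", "link", "description")}

def quote_post(item: dict, comment: str) -> str:
    """Draft a Markdown post quoting the inbound item."""
    return (f"{comment}\n\n> {item['description']}\n\n"
            f"[{item['title']}]({item['link']})\n")
```

That draft is the "seed needed to be social" mentioned above: my comment, their words quoted, and a link back to the source, all without a silo in between.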
Inbound RSS has two useful properties. First, it allows me to host my content on one system and also have it appear on specific platforms like Bluesky. It's liberating to use a writing tool of my choice rather than having to use the Bluesky app or a Mastodon client. Second, inbound RSS allows comments and reposts to happen outside a silo like a Mastodon instance, Bluesky or Threads. It allows me to integrate Andy's and Dave's content in my own site in a novel way if I choose.
If Andy were to subscribe to my feeds he could reply to me and I'd pick up his reply from his feed. Similarly with Dave's feeds. When we subscribe and respond on our own sites we avoid the nearly impossible problem of content moderation. If I want to filter Andy's or Dave's feed before I read it, I can. If I don't want to receive their feed, I stop harvesting it. By focusing on enabling individual agency of publication and content hosting we can eliminate a whole class of complex, thorny and intractable problems currently faced on the walled gardens of the social web.
If inbound RSS were widely supported I could easily provide content to Bluesky without cut and paste or specialized bridging software. I could do so just as I currently read posts from Bluesky and Mastodon on my own personal news site. Supporting inbound RSS for posts enables a real distributed system that can spread beyond hosted platforms to include ones at the Internet's edge, like my home computer. RSS is already proven to just work for syndication. Some platforms are also supporting the source namespace for transporting Markdown content. It should be trivial to use the same approach whether I am posting to open platforms like WordPress and Micro.blog or monetized platforms like Substack and Medium. As a writer I want this feature and interoperability. Systems and companies come and go. Today's content hosted on Substack could be tomorrow's GeoCities. The content will only be preserved when we can publish to many locations and platforms as easily as we can read an RSS feed.
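Reading that Markdown back out of a feed item is a small amount of code. A sketch, assuming Dave Winer's source namespace; I believe the namespace URI below is the one his spec uses, but verify it against source.scripting.com before relying on it, and treat the sample feed as illustrative.

```python
import xml.etree.ElementTree as ET

# Namespace URI per my reading of Dave Winer's source namespace spec.
NS = {"source": "http://source.scripting.com/"}

SAMPLE = """<?xml version="1.0"?>
<rss version="2.0" xmlns:source="http://source.scripting.com/">
<channel><title>Example</title>
<item><title>Hello</title>
<description>&lt;p&gt;Hello &lt;em&gt;world&lt;/em&gt;&lt;/p&gt;</description>
<source:markdown>Hello *world*</source:markdown>
</item></channel></rss>"""

def item_markdown(feed_xml: str) -> list[str]:
    """Prefer source:markdown when present, else fall back to description."""
    results = []
    for item in ET.fromstring(feed_xml).iter("item"):
        md = item.findtext("source:markdown", namespaces=NS)
        results.append(md if md is not None else item.findtext("description", ""))
    return results
```

The receiving platform gets the writer's actual Markdown instead of having to reverse-engineer it from rendered HTML.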
RSS feeds enable an ecosystem that could "just work". It already works for content syndication. It works for podcasting too. I think the hard part isn't implementation. Smart people work at Bluesky and smart people built Mastodon; the technical barriers are low. I think the hard part isn't the technology at all. It is willpower, and it's advertising how easy it could be. It's spreading the future more evenly.
I am a single person and I've already built a personal news site that aggregates nearly one thousand feeds. It's more news than I can read every day. I could aggregate more but I don't have time to read more. The software I have written can be used by anyone running Windows, macOS or Linux. I already provide executables for those platforms. I am NOT the first person to implement this. Others have implemented similar systems too. I also know people who use RSS readers like NetNewsWire and Newsboat to manually create websites similar to my personal one by cutting and pasting content. The practice is out there but I think it is largely lost in the noise of the silos. The walled gardens have a vested interest in keeping your attention captive.

I already aggregate the feeds of my friends and peers that are produced by Bluesky and Mastodon. I get them alongside the feeds I follow from organizations and news sites. With a little effort I can follow YouTube channels via an RSS feed. The infrastructure is largely implemented. What isn't is the ease of publishing my own RSS feed and using it on my account in another platform or system. RSS can allow us to follow all our friends if it is easy for them to also publish to the web. That's the missing piece, and the walled gardens have largely convinced people that only they can make it easy to be present on the web. That's bullshit in my opinion. We can make web publication trivially easy without centralization. We just need to organize our software differently. It's not just about me using the editor of my choice, it's about empowering everyone. If we embrace the scale of a single person, we can make the software needed to publish to the web as easily as posting on Bluesky. You can do that on a tiny Raspberry Pi computer! Inbound RSS is part of that.
I do not have the ultimate answer to kick starting the next web era. I think it is coming with or without my help. I think it arrives by individuals choosing to do things differently. Sometimes that'll be using existing software in novel ways. Some people, like myself, will write software to smooth things out. The new era will arrive and improve with experiments by a diverse group of people with diverse backgrounds and skills. I think my bit, my obligation, is to help the process along by writing software that proves this is possible and doesn't require a large corporation or a team of people. We can make this change literally at the individual level. My task is to write software for writers like myself. I think I need to create the software I want as a writer just as Dave has created the software he wants as a writer. The fact that others have created such software is proof of the health of the Web and Internet. We just need to notice it, acknowledge it and promote that shared knowledge too.
A wish list
Here's my wish list (much of it is getting built into Antenna App as my time permits).
- I want to write with the editors of my current choice or the ones I pick in the future
- I want to use the markup I think is appropriate; it should automatically be converted to HTML on site generation. Today that includes the following markup languages:
  a. Markdown
  b. CommonMark
  c. Open Screenplay format (nice for dialogues and transcripts)
- I shouldn't have to worry about document metadata when I am writing, but I should still be able to easily curate the metadata throughout the writing process
- I want to support posts, pages and content blocks.
- The extra bits that make a website really useful should be automatically generated:
  a. sitemaps
  b. RSS feeds
  c. OPML lists
  d. Open Search documents
- The system should be single user and run on a small computer like a Raspberry Pi without bogging down
- The resulting website should be trivial to copy to a public service, e.g. via Git, scp, sftp or dragging and dropping files with the operating system's file manager
These are features I've desired for a long time. They have been pretty consistent since at least 2015. I've written numerous systems that have some of these characteristics. In 2025 I started to pull them together in Antenna App. I'd like to see other software developers do something similar with their projects. I am very certain there are others with better visions and more refined design skills. I think a healthy web benefits from lots of choice. I am happy to share what I am writing and welcome others to use it, but I also think other eyes and other takes on similar features benefit everyone.
My current setup for producing this website is as follows.
- aretext and Gnome TextEdit for writing posts
- Antenna App is my preferred feed oriented content management system
- I publish via GitHub pages
Where would I like to be at the end of 2026? I'd like to move away from GitHub and once again host my own site on a system I run. Currently I'm looking at options from Mythic Beasts.
I run the draft versions of my websites on my local home network already. This gives me a space where I can test and review things if needed. I can also update them more frequently than I'm currently comfortable doing with GitHub.
A cool idea I would like to complete in 2026 is making Antenna App really useful for people aside from myself, friends and family who aren't software engineers or web developers. I want to develop a recipe for turning Antenna App running on a Raspberry Pi Zero 2W into a writer's appliance. I think it would be cool if you could carry your publication machine on your key chain. Doing that could let me carry my writing beyond my desktop and my phone's quick edits. I wouldn't need to rely on the cloud to sync content. I'd just defer publishing to the next time I had Internet access. The appliance would take care of the rest.
I have hope for 2026. In spite of the challenges and hardships of 2025, I think we can build the future the rest of us want. A future for all of us, if we choose to make it happen.