Knotbin

Wherever you get your Podcasts

· 4 min read

Podcasts are one of the few examples of media distribution that is both meaningfully decentralized and widely used. Podcasts are decentralized in the way AT Protocol specifically aspires to be, where the user doesn't realize it's decentralized. But podcasts go further in a way I don't think we talk about enough: they are decentralized in a way the user never thinks about at all, but implicitly understands through the language used to talk about them.

Podcasts are, at their core, RSS feeds. They are really the only form of RSS feed that survived the "death" of RSS as a medium for consuming blogging and microblogging. While podcast apps like Apple Podcasts and Spotify take submissions of RSS feeds in order to include them in the app, there are also public registries of podcasts, including ones made available by Apple Podcasts and Podcast Index. This allows smaller apps, which podcasters would never submit their shows to directly, to still maintain a library of all the same podcasts as any major app (there's a quick sketch below of just how little an app needs to do to build one).

The true measure of the success of the decentralization of podcasts is the way people talk about them. Listen to anyone promoting a podcast and you'll hear the five words "wherever you get your podcasts." This is especially interesting because I would assume the vast majority of podcast listeners use Apple Podcasts, Spotify, or maybe even YouTube Music. It would make sense, and is common practice for other media like music, to simply list the most popular options for listening instead of making such a grand statement. But for other mediums like music, the statement wouldn't be true. Every platform a song is on is a platform the artist explicitly submitted it to, whereas many small podcast platforms simply discover podcast RSS feeds without any submission from the creator.

This, I'm coming to realize, is the sign of a truly decentralized network: a critical mass of people whose way of referring to the network in speech acknowledges that they understand its completely interoperable nature. But the reason podcasts are so interesting to me is that the critical mass who refer to podcasts in a way that communicates they understand their decentralized nature have never consciously acknowledged, or even realized, that podcasts are, in their nature and design, decentralized. Because the reality is that the language "wherever you get your podcasts" doesn't actually communicate that podcasts are decentralized. It simply communicates a benefit of that decentralization. Even if a person who listens to podcasts doesn't understand that a podcast is hosted independently, they will understand that they can switch their podcast app to one with more features and still listen to all the same podcasts.

I think we, as the AT Protocol community, spend a lot of time thinking about how to communicate how AT Protocol works to non-technical users. This boils down, in large part, to the language we choose when giving that explanation. I think podcasts show us that the best way to communicate a network's decentralization to users is indirectly, often through direct interaction with the network. There are a lot of other lessons you can learn from podcasts about decentralized networks, especially about monetization, but I was just fascinated by the language used to refer to podcasts once I thought more deeply about it.
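For the curious, here's a minimal sketch of what "getting" a podcast actually involves under the hood: fetch an RSS feed over HTTP and read out its item elements. This is my own illustration, not any particular app's code, and the feed URL is a placeholder for any public podcast feed.

```swift
import Foundation

// Minimal sketch: every podcast is just an RSS feed, so "getting" a podcast
// is nothing more than fetching XML and pulling out the <item> elements.
final class EpisodeTitleParser: NSObject, XMLParserDelegate {
    private var buffer = ""
    private var insideItem = false
    private(set) var episodeTitles: [String] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        if elementName == "item" { insideItem = true }  // each <item> is one episode
        if elementName == "title" { buffer = "" }
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        buffer += string
    }

    func parser(_ parser: XMLParser, didEndElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?) {
        if elementName == "title", insideItem {
            episodeTitles.append(buffer.trimmingCharacters(in: .whitespacesAndNewlines))
        }
        if elementName == "item" { insideItem = false }
    }
}

// Placeholder URL; substitute any public podcast feed.
let feedURL = URL(string: "https://example.com/podcast/feed.xml")!
let parser = XMLParser(data: try! Data(contentsOf: feedURL)) // a real app would handle errors
let delegate = EpisodeTitleParser()
parser.delegate = delegate
parser.parse()
print(delegate.episodeTitles)
```

That's roughly the entire barrier to entry for building a podcast library, which is why even tiny apps can carry the same catalog as the giants.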
A network is, in large part, people's perceptions of what it is, and the best kind of network, especially a decentralized one, is one where people know how to use it to its full potential without having to know how it works.

What's a computer?

· 9 min read

The iPad's identity crisis is not new. It was long assumed that Apple wanted the iPad to eventually replace the Mac. In 2018, Apple released a series of ads, including one that's been mocked to death, in which a child uses her iPad Pro and infamously responds to her neighbor's "whatcha doing on your computer?" with "what's a computer?" The tagline of one of those 2018 iPad Pro commercials was "Imagine what your computer could do if your computer was an iPad Pro." The tagline of the 2020 iPad Pro was "Your next computer is not a computer." Apple was pushing hard on the idea that iPads could and should take the place of computers, and it seemed they truly believed the future of the computer was the iPad. It made sense at the time. MacBooks were awful, riddled with problems, including horrible battery life, the notoriously awful Butterfly Keyboard, and a ton of overheating issues. They were selling poorly, reviewed harshly, and clearly in decline. You didn't have to look far for reasons Apple would want to make Macs look like the past and iPads look like the future.

All that changed in November of 2020, when Macs switched from Intel to Apple Silicon, making them so efficient, powerful, and cool that the MacBook Air's fan was completely unneeded and removed. These Macs received near-universal praise from reviewers and started selling like hotcakes again. For the next few years, Apple backed away from the "iPads are the next computer" idea in marketing and in software. They added tiny doses of Mac-like multitasking to iPadOS with underwhelming features like Stage Manager, which were received poorly and criticized as janky and limited. At the same time, Apple was putting M-series chips, the same Apple Silicon chips found in Macs, into iPads. Reviewers widely complained that iPads with hardware and power comparable to Macs had nothing to use that power on because of how limited their software was.

This year, though, Apple finally gave in to the requests for full window management, and iPad geeks, including me, celebrated that Apple was finally taking real steps to make the iPad usable as a computer. I downloaded the first developer beta of iPadOS 26 on my 11" iPad Pro the day of WWDC because, at the time, I relied very little on my iPad for work and wanted to install the beta on one of my devices. I played around with it a little, and it was a bit janky but overall cool. It was super nice to see that Apple was finally, seemingly, back on track to make the iPad into a real computer, and with a bit of refinement, I believed it could get there.

Fast forward to this past week. I'm in a program that put me in a hacker house with 12 other teenagers in San Francisco (this program ended up being very sketchy/illegal and was shut down, but that's another story), doing a ton of work for Spark & Airport, and we're accelerating fast. I have a ton of coding to do, and I'm juggling 10 Linear issues and 20 Slack messages at once. This was when I spilled a glass of water on my Mac. In my defense, it was an incredibly crowded house. There were 12 kids, 2 bedrooms, and one bathroom. It was certainly an experience. I immediately panicked, wiped the water off, and left the Mac lying slanted in front of the radiator for about 2 hours. It was fine for about a day and a half, at which point the screen took its leave and stopped working.
I took it to an Apple Store for what I was told would be a 3-5 day repair, which ended up being 8 days because of the 4th of July and the following weekend. For those 8 incredibly long days, I was left with only one device on which to code: my iPad. Anyone who's ever tried to code on an iPad knows it's a treacherous experience. But I had a keyboard case, I had updated to iPadOS 26, which supposedly made my iPad as good as a computer, and there were cloud tools like GitHub Codespaces that let me use a virtual code editor. How hard could it be?

iPadOS 26 is a puzzling little thing. On its surface, it's just uncanny-valley macOS, but the best way to describe it after being forced to use it exclusively for a week is that it's what someone who has never used a computer, but has seen pictures and screenshots of one, would think a computer is. You can do basic things. You can move windows wherever you want and resize them however you want, but a few minutes of using it is enough to realize you don't actually want to. The 11" iPad is tiny. There is almost no reason not to use either full-screened windows or a split screen between two, maybe three, windows. I instantly wanted back the old split-screen option, which is now available only through the full window-management mode.

But window management is not what makes a computer, and in the case of the iPad, that becomes clear as soon as you try to code something on it. The Swift Playgrounds app can build and run Swift Playgrounds apps, but every other app is limited to its own sandbox, unable to run a server locally or run any builds. Tools like GitHub Codespaces try to abstract these things away into a virtual environment in the cloud, and that works for basic tasks like web development. But when you try to build an app with any real tools, like React Native, Flutter, or even Apple's Xcode, you need to build locally and usually deploy to a local device connected over WiFi or a cable. None of this is possible in any iPad app, because all apps are completely trapped, sandboxed into their own silos and unable to interact with the operating system. Firebase Studio is another tool that promised to let you run your app on a virtual simulator, but it only works in very specific environments and only offers Android simulators. It was clear to me that if the iPad wanted to be suitable for real workloads, including programming, it needed to be able to complete these tasks itself, not offload them to virtual environments that try to sidestep the limitations of the operating system.

So why can't the iPad do these things itself? All of this sandboxing comes from the limitations placed on apps distributed through the App Store. They exist in the name of security and customer assurance, and it's true that 90% of apps will never need to interact with anything outside their own sandbox, but the other 10% provide a ton of value precisely because they do. Security is, of course, a noble cause, but it clearly leaves the platform dictated by what Apple chooses to implement as a service. As Apple's own security documentation puts it: "If a third-party app needs to access information other than its own, it does so only by using services explicitly provided by iOS, iPadOS, and visionOS." This is exactly what stops iPad apps from taking full advantage of the OS, because the services provided are so limited.
The fact that what the iPad can be will always be dictated by what Apple chooses to allow developers to do, rather than by what developers choose to make for the platform, is what will keep it from being a computer. A few months ago, I found a Mac app that adds flies flying around your trash can when you haven't emptied it in a while. If you empty the trash or move your mouse over them, they fly away. The ability to do this wasn't thanks to explicit APIs Apple built into the OS so third-party apps could request the mouse position and render custom graphics anywhere on the screen even while the app itself is closed. It was thanks to clever engineering by an app developer on an OS that doesn't completely box its developers into first-party APIs. This app couldn't have been distributed on the App Store because it wouldn't meet the App Sandbox requirements, but that's okay, because on a computer, apps can be downloaded from anywhere. Even if an app isn't approved or even notarized by Apple, it can always be opened with a few extra prompts or a quick trip to the command line. (A rough sketch of the kind of thing I mean is at the end of this post.)

What makes a real computer, in my eyes, is this ability for it to be molded by any developer who wants to make things for it, without waiting for permission in the form of first-party APIs. That's what a real computer is, in case that kid is wondering, and the iPad won't become one unless Apple can let go of the App Store's billions of dollars in revenue each year, which I am 99% sure it unfortunately won't. This is why the iPad won't become a real, usable computer that anyone can hack on anytime soon, barring an insane shakeup at Apple, and it's why window management was fundamentally mistaken for the barrier to the iPad becoming a real computer, when the real barrier lies at the OS and App Store level. Until those barriers are broken, I'll happily be using my newly repaired 14" M2 MacBook Pro for any and all work-related tasks, with a new appreciation for my ability to run any software on it, and I'll keep using my iPad for what an iPad is used for: watching movies and the occasional drawing or diagramming. Make no mistake, the iPad is not a computer, and it will continue not to be one for a very long time.
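For the curious, here's a rough sketch in the spirit of that flies app (my own guess at the approach, not its actual code) of the kind of thing any Mac developer can build with public AppKit APIs and ship outside the App Store: a transparent, borderless, always-on-top window that follows the global cursor. There's no equivalent set of capabilities available to an iPadOS app.

```swift
import AppKit

// Hedged sketch: a background app with no Dock icon that draws an emoji
// in a click-through overlay window and keeps it next to the mouse.
let app = NSApplication.shared
_ = app.setActivationPolicy(.accessory)       // run in the background, no Dock icon

let overlay = NSWindow(
    contentRect: NSRect(x: 0, y: 0, width: 120, height: 120),
    styleMask: [.borderless],
    backing: .buffered,
    defer: false
)
overlay.isOpaque = false
overlay.backgroundColor = .clear
overlay.level = .floating                     // draw above normal windows
overlay.ignoresMouseEvents = true             // clicks pass through to apps below

let label = NSTextField(labelWithString: "🪰")
label.font = NSFont.systemFont(ofSize: 48)
label.frame = NSRect(x: 0, y: 0, width: 120, height: 120)
overlay.contentView?.addSubview(label)
overlay.orderFrontRegardless()

// Poll the global cursor position and keep the overlay next to it.
Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { _ in
    let p = NSEvent.mouseLocation
    overlay.setFrameOrigin(NSPoint(x: p.x + 12, y: p.y + 12))
}

app.run()
```

None of this requires a private API or Apple's blessing; it just requires an OS willing to let apps touch things outside their own window.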

How AI became the enemy

· 7 min read

This post was originally written September 30th, 2024.

By November of 2022, AI companies had created technology more powerful than what most people at the time believed would be achieved in their lifetime. The advancements in AI were hidden, shrouded in the secrecy of research labs inside huge tech giants. The deepest opinion a member of the public could form about the technology was mere speculation. When OpenAI launched ChatGPT in November of 2022, the curtain shielding not just OpenAI but all AI companies was pulled back. What it revealed was a half dozen companies that had fully intended to continue their research away from the pressure of the public eye. Now, they had to dance. It was then that anyone and everyone could see for themselves what the technology offered and form criticism and commentary that went beyond speculation.

In the days after ChatGPT launched, there was some fear and uncertainty, but the main reaction was awe. I remember using ChatGPT for the first time, the way my eyes widened as it generated multiple intelligent paragraphs within the span of a single second. It was the "oh shit" watershed moment that showed how much potential this technology had.

Now the curtain has been gone for two years. None of the companies, including OpenAI, were prepared for it to come down the way it did, but all of them have been doing their best to perform for the audience. The public has watched the show for two years now. The novelty of those first few days is gone. In its place is distrust and aversion. There is a small group that is incredibly enthusiastic and bullish on AI, holding onto the feeling they had when they first tried it, even if they don't fully understand the technology. But this group has largely been alienated from the everyday person. The rest are either incredibly skeptical or actively anti-AI. I've noticed many factions, tribes if you will, of this AI criticism. Each has its own points and arguments, each with its own flaws and strengths.

The first tribe, the creatives, is the most vocal and passionate group you'll encounter. They are staunchly against generative AI and all its uses. Their main concern is the unethical acquisition of training data. Most will argue that AI is incapable of creating anything original because of how it's trained. They also emphasize drawbacks unrelated to training data that all the other tribes point out, such as hallucinations and obvious mistakes.

This group has a lot to say that I agree with. I think people should be paid if their work is being used for AI training. However, I think those who assert that nothing AI produces is truly original should consider what definition of 'original' they're using and whether any human-made work can fit that definition. AI is much more similar to humans than we would like to admit. It is fed data much like we are fed experiences, conversations, and the opinions of others. Just like us, everything it produces draws on this pre-existing knowledge base, and because of those influences, nothing either of us produces can be truly original. This group also led the way on the aversion to AI. Much of it is reasonable. AI is being pushed on consumers incredibly hard, as a revolution, before they can even tell whether it's helpful or not. But what I take issue with is that, in the eyes of many of these people, no generative AI tool, from grammar check to image editing, can be viewed as helpful or useful simply because it carries the label of AI.
The second tribe is much less prominent but has reached the same broad consensus about AI. To be clear, I'm mainly talking about journalists and reviewers who look at this technology from the outside in, with a bit more knowledge than the average non-enthusiast but not as much as an engineer would have. The main criticism you'll hear among this tribe is that AI is overhyped or has stagnated, that AI companies promised more than they could deliver, and that AI can't get much better than it is now. You'll hear the common complaints about hallucinations, but here they're talked about in a very different way, with much less hatred and disgust than the creatives, but with a twinge of what feels like betrayal.

The criticism I take the most issue with is that AI has stagnated, is behind schedule, or won't get better. People perceived ChatGPT as the first technology of its kind, the first model as powerful as it was at the time, when in reality the model powering ChatGPT, GPT-3, had already existed for almost 3 years before ChatGPT. It therefore appeared to the public as if GPT-4 came out a mere 5 months after GPT-3, because the public had only known about GPT-3 for that long, even though it had existed for 3 years. This is why tech enthusiasts expect GPT-5 to come out sometime this year, so much so that Sam Altman had to clarify at an event in April of this year that GPT-5 would not be released, even though the previous schedule would line it up for late 2025 or early 2026. People assume the technology only got good when they first heard about it.

Then there are the teachers. While a tiny number of teachers want to embrace and work with AI, most see it as a non-starter. This makes me pretty sad as a student, because for a moment it seemed like teachers would work with students to integrate AI into the classroom, but now we're back to teachers vs. students: teachers fighting to stop the use of the technology, and students sneakily using AI and finding ways to avoid detection. This leads to teachers making students write long essays on paper, which makes it harder for students to write and edit and for teachers to grade. Where I live in New York City, the DOE made a version of ChatGPT specifically for schooling, and even that is banned at my school. I hope teachers will eventually realize that the way forward is to embrace AI for tasks like memorization, brainstorming, and getting started on assignments, not just to write it off as a cheating mechanism.

I think it's not unnatural to be whipped into a frenzy when you see companies you have no reason to trust all pushing the same scary technology, one you've seen is more flawed than they admit, but I think there is a balance to be struck. The dot-com bubble burst not because internet companies weren't the future but because investors were throwing money at any company with a domain, whether or not it had potential. The AI bubble will burst not because AI doesn't have massive potential but because neither companies nor investors fully understand the best ways to use that potential, and they think they should just slap 'AI' onto anything they can think of. The distrust and hatred toward the companies that take work without asking, do incredibly creepy things with people's likenesses, or just make stupid scams with no use other than taking your money makes complete sense, and I completely agree with the sentiment that AI is being stuffed into places where it has no business being (Excel???) and isn't yet powerful enough for the mass consumer base it's being pushed on. But falling into the trap of villainizing the technology itself, hating a product just because it uses any form of AI, or arguing that AI will never get better creates an us-versus-them dynamic where one is completely unnecessary. It's important not to forget that original feeling you had when you first tried ChatGPT, to remember how powerful and somewhat magical it can be, while still remembering the pitfalls of the technology and how companies use it, and keeping in mind that it will improve with time.