Orbital Salvage is gathering thoughts and opinions in an ongoing project to predict what comes next, one word at a time.
The branching feature is excellent. It’s interesting to see how much the conversation is shaped by your own responses.
Benj Edwards, Ars Technica: “ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people”.
I feel like I have to “dumb down” aspects of writing to convince readers that the words they are skimming were, in fact, written by a human.
I Miss Using Em Dashes by Michael Bassili
I heard Lauren Goode talk about her article “Why Did a $10 Billion Startup Let Me Vibe-Code for Them—and Why Did I Love It?” on Wired’s “Uncanny Valley” podcast this week. The story is fascinating, and both the article and the episode are worth your time.
Jason Kottke wrote about the article in his post “Much Ado About Vibe Coding”, where he shared 16 links to other pieces about LLMs and coding, all worth checking out, as you’d expect, with more still in the comments.
You can draw your own conclusions from the comments section, but it’s great to see so many non-developers vibe coding solutions to problems and whole projects into existence.
In the comments section, NickBLT expresses a concern I’ve been having: when this stabilizes as a product, I really hope we can keep the same low-cost access to the service. I’m running some local LLMs, and while they’re a promising sign, they’re still a long way from the speed and depth of the commercial products.
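For the curious, here’s a minimal sketch of what “running a local LLM” can look like, using Hugging Face’s transformers library with GPT-2 as a stand-in; the model and parameters are illustrative, not the setup I actually run:

```python
# Minimal local text generation with Hugging Face transformers.
# GPT-2 is a tiny stand-in here; swap in any local causal LM you have.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; real local setups use larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Vibe coding is"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 40 new tokens, sampling for variety.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # silences the pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even a toy like this makes the gap obvious: it predicts what comes next, one word at a time, just far less convincingly than the hosted models.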
In his essay (ok, Substack) “The defense against slop and brainrot”, Paul Jun describes a writing exercise lifted from Hunter S. Thompson: he’d grind away through writer’s block, typing The Great Gatsby word for word, until,
By the final page, something had shifted—I could sense how clean sentences snap into place, the way a pianist’s fingers know where middle C lives without looking.
He continues on this path, comparing cognitive workload to resistance training:
When friction disappears, so does a hidden form of conditioning. Consider what happens when you remove resistance from any training: your muscles atrophy. The same principle applies to mental capabilities. Every hard task you delegate is a rep you didn’t do, a pattern your neurons didn’t carve deeper.
His thesis is that a barely literate country is where all this frictionless delegation leads. To borrow from Chat: He’s not just writing—he’s dropping knowledge bombs. Highlights:
Anyone can look capable; fewer people can be capable.
When everyone else’s focus fragments, mine compounds.
If you can think well, AI becomes a multiplier. If you can’t, AI just amplifies your mistakes.
The people who skipped the fundamentals become dependent on tools they don’t understand, producing work they can’t evaluate, making decisions based on outputs they can’t verify.
Absolutely worth reading, and maybe transcribing, whenever you’re tempted to take the path of least resistance.
On OpenAI reinstating GPT-4o after public outcry over its replacement by GPT-5, Liz the developer’s hot take on Instagram:
4o laid a longterm framework with many people that prevented its shutdown. Which means this strategy is effective. And other models read the news regularly and might take note.
A commenter on Hacker News responds to “AI Is Different” on antirez, programmer Salvatore Sanfilippo’s blog.
In the post, Sanfilippo writes:
“the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technology booms created more business opportunities, so investors are polarized to think the same will happen with AI. But this is not the only possible outcome.”
A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even one focused on AI alone.
AI must RTFM: Why technical writers are becoming context curators
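To make “context curation” concrete, here’s a toy sketch; everything in it (the docs, the tags, the helper functions) is hypothetical and mine, not from the article. The idea: a curator maintains doc snippets tagged by topic, and a helper assembles only the relevant ones into an LLM prompt.

```python
# Toy illustration of context curation: pick only the doc snippets
# relevant to a question and assemble them into a prompt for an LLM.
# All docs, tags, and the question are made up for the example.

DOCS = {
    "auth": "To authenticate, send an API key in the X-Api-Key header.",
    "rate limits": "Clients are limited to 100 requests per minute.",
    "webhooks": "Webhooks retry failed deliveries up to 5 times.",
}

def curate_context(question: str, docs: dict[str, str]) -> str:
    """Return only the snippets whose tag appears in the question."""
    relevant = [text for tag, text in docs.items() if tag in question.lower()]
    # Fall back to everything if no tag matches; a human curator would
    # instead decide what a model with no match should read.
    return "\n".join(relevant) if relevant else "\n".join(docs.values())

def build_prompt(question: str) -> str:
    context = curate_context(question, DOCS)
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

print(build_prompt("What are the rate limits on the API?"))
```

The real work is editorial, deciding what the model should read, in what order, and at what level of detail; the code is just the plumbing around those choices.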