Every week, there’s a new thing on Twitter that’s supposed to change everything. A new AI coding agent. A new framework. A new workflow that some influencer built live on stream in 20 minutes. The replies are full of “this changes everything” and “we’re so cooked” and fire emojis.
We don’t fall for it. At Skcript, we call it “Twitter hype” internally, and we have a very specific way of dealing with it.
When something trends, we sit back. We don’t install it. We don’t try it that day. We watch. We read the source code if it’s open source. We try to understand what it’s actually doing versus what the demo video showed. Most of the time, the gap between those two things is enormous.
Take OpenClaw, for example. I tried setting it up probably 20 times, and it never did anything meaningful for me. Varun tried it too and found that when you try to stop it mid-execution, it keeps running in the background. You lose control. And for us, control is non-negotiable. We are, admittedly, control freaks when it comes to our production systems.
Or take multi-agent orchestration. Sounds incredible on paper: spin up a team of AI agents that plan, code, test, and deploy autonomously. We haven’t adopted it. Not because we don’t see the potential, but because we don’t trust what we can’t control. Our code cannot become a black box. If Claude is down tomorrow and we have a P0, we need to know exactly what every line does. That’s not negotiable.
My wife, who is also my co-founder, and I were talking about this on a drive recently. We really think a lot of these tools are just very well-written system prompts. That’s something we can learn from: studying how they structure their prompts, how they chain tasks. But the tool itself? It’s yet another hype cycle. Like n8n before it. Like the 50 AI wrappers before that.
The tech influencers need something new every week. That’s their job. Our job is to ship FeatureOS and SupportWire—software that governments rely on, that serves 3 million people a week. Those are different jobs with different risk tolerances.
Here’s our filter. Before adopting anything, we ask three questions:
- How much did you actually spend on it—time and money?
- How many times did you interrupt it to make it do what you wanted?
- Would this work on a production codebase with real customers, or only on a weekend project where you don’t care if the data gets corrupted?
Most things fail question three.
That said, we’re not Luddites. We use Claude Code every day. We moved to it from Cursor because it gave us more control. We write the core architecture ourselves (authentication, data models, fundamental systems) and let AI handle the incremental work on top. We review every PR. We have rules in our claude.md that teach the AI how our codebase works, so it doesn’t hallucinate patterns that don’t exist in our system.
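To make that last point concrete: a claude.md is just plain instructions checked into the repo that the agent reads before touching code. What follows is a hypothetical sketch, not our actual rules; the paths and conventions in it are invented for illustration:

```markdown
# Codebase conventions (hypothetical example)

- All database access goes through the repository layer in `app/repositories/`.
  Never call the ORM directly from controllers.
- Authentication is handled by our own middleware. Do not introduce
  third-party auth libraries or touch the auth data models.
- Follow the existing REST naming documented in `docs/api.md` for new
  endpoints; do not invent new URL patterns.
- Run the full test suite before proposing a change. Never skip or delete
  a failing test to make a PR pass.
```

The point of a file like this is narrow: it tells the AI what already exists, so it extends our patterns instead of inventing its own.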
The way I see it, we’re probably going from horses to cars right now. Everyone is skeptical, including us. But the people who’ll win aren’t the ones who bought the first car off the lot. They’re the ones who understood the engine before they drove it.
Five years from now, this will all look different. But right now, skepticism is a feature, not a bug.