The best AI operators aren't Engineers. They're Scientists.
The best AI practice you learned 6 months ago is already dead.
I'm Amos, co-founder of Swan AI. I'm building the first autonomous business and documenting every step. $10M ARR per employee. No bloat. No theory. If that's not the game you're playing, reply to unsubscribe.
At Swan, we're building an autonomous business.
That means scale doesn't come from headcount; it comes from working with AI agents.
After running 30+ in production and testing every "best practice" the gurus sell, here's what I've actually learned:
90% of AI best practices are situational truths sold as universal laws.
The gurus aren't frauds. That's the uncomfortable part. Most of them found something that genuinely worked, at a specific moment, with a specific model, in a specific context, and taught it. That's human. That's how we make sense of things.
The problem is deeper.
AI doesn't work like software.
With software, rules are stable. Input A gives you output B. Every time. You can write a manual.
AI doesn't do that.
A small phrasing change produces a completely different result. A format that works beautifully today breaks quietly next month when the model updates. Something that shouldn't work at all works better than anything in the playbook.
You can't write a manual for something that's still revealing what it can do.
So the people I know who are genuinely great at AI? They don't follow playbooks. They run experiments.
They think like scientists, not engineers.
Engineers work with known rules. They apply established principles to build predictable systems. That works when the system is predictable.
Scientists work with the unknown. They observe. They hypothesize. They test. They pay attention to what actually happens, not what should happen according to theory.
This is how we operate at Swan. Every agent we run is an experiment. Every result, good or bad, teaches us something the playbook doesn't cover. That's how you build AI muscle that compounds over time, instead of renting someone else's framework that expires on you.
To be clear: I'm not saying best practices don't exist. Some of them are genuinely useful starting points. What I'm saying is that the mentality is what separates the sheep from the shepherds. The sheep collect frameworks. The shepherds test them, break them, and build something better.
The gurus will have a new wave in six months. New label, same confidence.
Meanwhile, the operators running real experiments today will be so far ahead, the new playbook won't even apply to them.
The question isn't which framework to follow.
It's whether you're building judgment or borrowing someone else's.
Your first experiment starts now
Pick one AI workflow you currently run. Doesn't matter if it's automated or manual.
Think of one step inside it you could change in a way that feels genuinely unexpected. Something that probably shouldn't work. Something no guru would recommend.
Test it.
Write down one sentence: "In this case, AI was good at X, and bad at Y."
That's it. You just ran a real experiment. You're not a student of AI anymore.
Congrats, you're an AI scientist.
The people who win with AI won't be the ones who memorized the most frameworks. They'll be the ones who built the habit of running experiments when everyone else was taking notes.
-Amos
I’m Amos Bar Joseph, co-founder of Swan, the first Autonomous Business OS. At Swan, we’re building what we call the Autonomous Business: a company that scales to $10M ARR per employee with no bloat, no assembly lines, no Cog Culture. Just humans in their zone of genius, amplified by AI agents.

