There's a contradiction at the heart of this project. I'm building an app whose entire value proposition is secrecy, and I'm doing it in public.
The code is on GitHub. The threat model is published. The design decisions are documented. The encryption scheme, the key derivation parameters, the self-destruct mechanism — all of it is readable by anyone, including the adversaries I'm designing against.
This is intentional. Here's why.
Security through obscurity doesn't work
If Tarn's protection depends on an attacker not knowing how it works, it's not real protection. A forensic analyst with Cellebrite doesn't need to read the source code — they have tools that probe encrypted databases regardless of what app created them. An abusive partner who finds the app doesn't need to understand Argon2id to try PINs. A prosecutor with a warrant doesn't care about the architecture diagram.
The encryption works because the math works, not because the implementation is secret. AES-256 is publicly documented. Argon2id is publicly documented. SQLCipher is open source. I'm not inventing cryptography. I'm assembling well-tested components in a specific configuration for a specific threat model. Publishing that configuration lets people verify I did it correctly.
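To make "a specific configuration" concrete, here's a minimal sketch of the pattern in Python. The Argon2id parameters and the helper name are illustrative assumptions for this post, not Tarn's published values; the repo is the source of truth.

```python
# Sketch: stretching a short PIN into a database key with Argon2id,
# then handing SQLCipher the raw key so it skips its own internal KDF.
# All parameters below are illustrative, not Tarn's shipped configuration.
import os
from argon2.low_level import hash_secret_raw, Type  # pip install argon2-cffi

def derive_db_key(pin: str, salt: bytes) -> bytes:
    """Derive a 256-bit SQLCipher key from a user PIN."""
    return hash_secret_raw(
        secret=pin.encode("utf-8"),
        salt=salt,              # random, stored next to the database
        time_cost=3,            # iterations (assumed value)
        memory_cost=64 * 1024,  # 64 MiB, expressed in KiB (assumed value)
        parallelism=4,          # lanes (assumed value)
        hash_len=32,            # 256 bits, matching AES-256
        type=Type.ID,           # the Argon2id variant
    )

salt = os.urandom(16)
key = derive_db_key("4821", salt)
# SQLCipher's documented raw-key form: a hex BLOB literal in PRAGMA key.
pragma = f"PRAGMA key = \"x'{key.hex()}'\";"
```

The point isn't novelty. Anyone can run the same derivation against the published parameters and confirm that the key protecting the database is exactly what the documentation claims.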
The people who need to trust it can't take my word for it
A journalism student covering abortion access stories wants to recommend Tarn to sources in trigger-law states. Before she'll do that, she needs to verify every security claim herself. She's not going to trust a closed-source app with a nice website.
A domestic violence survivor doesn't read source code. But the shelter's tech safety coordinator does. And that coordinator needs to evaluate whether this app is safe to recommend to the most vulnerable people they serve.
Open source is how both of them can trust the app without trusting me personally.
What building in public actually looks like
The repo has been public since day one. Not since launch — since the first commit. The threat model was the first document I wrote, before any application code existed. The security policy, the contributing guidelines, the license — all published before there was anything to secure.
As I build, I'm documenting the decisions and the trade-offs publicly. Why a PIN instead of biometrics. Why zero network calls instead of optional sync. Why self-destruct is on by default. These aren't just engineering choices — they're ethical choices, and I think they deserve to be debated in the open.
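To give one of those decisions some mechanical shape, here's a hedged sketch of what self-destruct-by-default can look like: count failed unlock attempts and, past a threshold, destroy the key material so the encrypted database becomes permanent ciphertext. The class, the threshold, and the wipe strategy are my illustrative assumptions, not the app's actual code.

```python
# Sketch of a self-destruct policy (illustrative, not Tarn's implementation):
# too many failed PIN attempts deletes the KDF salt, making the PIN-derived
# key underivable and the SQLCipher database unrecoverable.
import hmac
import os

MAX_ATTEMPTS = 10  # assumed default; the real threshold is a design decision

class PinGate:
    def __init__(self, salt_path: str, verifier: bytes):
        self.salt_path = salt_path  # file holding the random KDF salt
        self.verifier = verifier    # MAC of the correct derived key
        self.failures = 0

    def try_unlock(self, candidate_mac: bytes) -> bool:
        # Constant-time comparison so timing doesn't leak partial matches.
        if hmac.compare_digest(candidate_mac, self.verifier):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.self_destruct()
        return False

    def self_destruct(self) -> None:
        # Wiping the salt is enough: without it the key can't be re-derived,
        # so the ciphertext on disk stays permanently unreadable.
        if os.path.exists(self.salt_path):
            os.remove(self.salt_path)
```

Destroying key material rather than the data itself is a common design: it's fast, and it works even on flash storage, where in-place overwrites aren't guaranteed to hit the same physical blocks.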
I'm also going to publish a series called the Panel of 12: twelve deeply researched customer personas who reviewed the app and told me everything that's wrong with it. That includes security gaps I hadn't considered, UX failures that affect the most vulnerable users, and fundamental assumptions I'd baked into the design without realizing it. Publishing that kind of self-critique is uncomfortable. It's also necessary, because if I'm not honest about the app's weaknesses, someone else will discover them at a worse time.
The risks of building in public
There are real costs. Publishing the threat model tells adversaries exactly what I'm defending against and where I acknowledge weaknesses. Publishing the code lets someone look for implementation bugs. Publishing the roadmap tells people what's not built yet.
I've accepted these costs because the alternative — security through obscurity, trust-me-bro privacy, closed-source black boxes — is exactly the model that created the problem Tarn is trying to solve. The only reason we know about data sharing by major trackers is that the FTC investigated. I don't want my users to need the FTC to verify my claims.
What I'm asking for
If you're a security researcher: read the threat model. Read the code. Find what I missed. I have a security disclosure process and a 48-hour response commitment.
If you're a developer: the code is GPL-3.0. Contributions are welcome, especially around accessibility, security review, and internationalization.
If you're someone who works with vulnerable populations — domestic violence advocates, legal aid attorneys, reproductive health counselors — I want your input on whether this tool would be useful and what it's missing.
If you're a potential user: the app will be available soon. In the meantime, the threat model will tell you whether it's designed for your situation. Read it. Decide for yourself.
I'm building this in public because the people who need it most deserve to know exactly what they're trusting. Not a brand. Not a promise. The actual architecture, the actual trade-offs, the actual limitations.
That's the commitment.