The Soft Power Dilemma: Private Influence vs. Open Standards in Cryptography
Introduction: Influence Beyond the Standards Process
In the cryptography and security community, personalities can wield outsized influence on protocol design. When a respected engineer's personal opinions and projects start guiding implementers more than official standards, a tension arises between public influence and private governance. Filippo Valsorda – a prominent cryptographer, Go language contributor, and blogger – illustrates this dilemma. Valsorda's prolific commentary and self-published specifications have earned him a loyal following, to the point that his blogs and GitHub projects are treated by many engineers as de facto references. Yet these behaviors, however well-intentioned, are raising alarms among standards veterans. They argue that Valsorda's pattern of bypassing consensus venues and critiquing decisions from the sidelines is ultimately harming protocol outcomes and eroding shared standardization efforts. This isn't a personal attack on Valsorda's competence or intent – by all accounts he is highly skilled and acting in good faith – but a system-level critique of how one individual's approach can distort collaborative processes.
Private Specifications vs. Open Consensus
A core point of tension is the rise of privately governed specifications that operate outside traditional consensus-driven standards bodies. Valsorda has been at the forefront of such efforts. Notably, he helped launch the Community Cryptography Specification Project (C2SP), an initiative that explicitly rejects the usual consensus model. As the C2SP README states, "C2SP decisions are not based on consensus. Instead, each spec is developed by its maintainers... Since C2SP produces specifications, not standards, technical disagreements can be ultimately resolved by forking." 1 In other words, C2SP treats specifications like open-source software projects: small maintainer groups iterate rapidly and if there's an impasse, one can simply create a divergent version. This approach prioritizes speed and "opinionated development" over broad agreement 1.
On its face, that model has merits – it mirrors agile open-source workflows – but it flies in the face of how internet interoperability standards have traditionally been built. A concerned engineer in one discussion flatly noted that C2SP's philosophy is "the opposite of how interoperability standards… should work. We need stability and broad input, not the ability to fork when we disagree." 2 Open standards bodies like the IETF rely on painstaking consensus and public debate specifically to avoid fragmentation; the goal is to get all stakeholders on board so we don't end up with multiple incompatible protocol versions. By contrast, a "publish first, fork later" approach risks a proliferation of competing specs if maintainers don't see eye to eye. It concentrates decision power in the hands of spec authors and maintainers, rather than the broader community.
Valsorda's embrace of the C2SP model exemplifies how private governance can clash with consensus-driven processes. For instance, one of his C2SP specs defines a "static asset-based CT log" API outside the IETF. When Google's Certificate Transparency (CT) project began adopting that API, some in the community balked at the precedent: "A policy requiring 'implement RFC6962 plus this C2SP spec' sets a bad precedent… We risk creating a patchwork of requirements spread across RFCs, GitHub repos, and various specs." 2 In other words, if new CT features are codified in a personal GitHub repository rather than through the IETF's open process, the ecosystem may splinter. Different logs or vendors might implement different "patches" of C2SP specs and RFCs, making it harder to know what "the standard" even is. This patchwork concern is not just theoretical – it underscores the real cost of eschewing consensus. The efficiency of a tight-knit spec maintained by a few comes at the expense of communal buy-in and consistency.
Critiquing Standards from the Sidelines
Another hallmark of Valsorda's pattern is his tendency to publicly critique standards decisions – often harshly – without channeling that feedback into the official standards process. Over the past few years, he has published a series of blog posts and social media threads dissecting perceived missteps in protocols and registries. These posts are undeniably insightful and technically fluent, but the issue is where and how the feedback is delivered. Rather than raise issues on IETF mailing lists or contribute text to draft proposals, Valsorda frequently voices his critiques on his personal blog or Twitter/Mastodon, where they garner significant attention but can't be directly incorporated by the standards working groups he's criticizing.
For example, in "Registries Considered Harmful," a 2020 essay on his blog, Valsorda took aim at a fundamental IETF design practice: the use of registries to allow algorithm agility. Protocols like TLS, he noted, maintain long lists of cipher suite IDs, signature algorithm codes, etc., managed by IETF/IANA bureaucracy. Valsorda didn't mince words about this pattern: "I think these registries are a design smell at best, and outright harmful in most designs." 3 He argued that enumerating cryptographic options encourages a "gotta catch 'em all" mentality that leads to complexity and vulnerabilities – instead, protocols should pick one set of primitives and stick with them 3. This is a bold position that cuts against decades of IETF philosophy about crypto agility. Yet, rather than bringing this philosophy into an IETF working group on TLS or a CFRG (Crypto Forum Research Group) discussion, Valsorda published it on his own site. The essay certainly sparked discussion among engineers who read his blog, but it offered no formal path for the IETF to consider or respond to his suggestions. There was no draft standard titled "Cryptographic Agility Considered Harmful" for the community to debate; the critique lived entirely outside the tent.
This pattern repeats with Valsorda's commentary on specific IETF decisions. When a severe OpenSSL vulnerability in Punycode decoding came to light in late 2022, Valsorda again used his blog to examine "why that code was even necessary." He laid the root cause squarely at the feet of an IETF decision: "The answer is: an explicit IETF design choice that made Punycode decoding part of X.509 verification, without even a line of acknowledgement in the Security Considerations." 4 In his view, an IETF standards group had needlessly mandated a dangerous feature (internationalized email addresses in certificates without proper warning), forcing OpenSSL to implement a brittle parser and thus inviting a bug. He went so far as to call this "as much an IETF specification failure as a C language failure" 4 – effectively saying the protocol design was as much to blame as the coding mistake. Again, this diagnosis was broadcast via personal channels (his blog, social media) rather than raised within the IETF X.509 or security area while the feature was being designed. By the time the critique emerged, the relevant RFC (8398 and its amendments) had long since been published. The IETF got post-facto blame, but no opportunity to adjust course during development.
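To give a sense of the extra parsing such a requirement pushes into certificate verification, here is a hedged Go sketch of the Punycode-decoding step that handling internationalized email addresses in certificates implies, using the golang.org/x/net/idna package. This is not OpenSSL's code and not a complete verifier, only an indication of the kind of work involved.

// A rough sketch of the decoding step that RFC 8398-style internationalized
// email addresses imply: before such an address in a certificate can be
// compared against a name constraint, its domain's A-labels ("xn--...") must
// be Punycode-decoded. Uses golang.org/x/net/idna; not OpenSSL's implementation.
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/idna"
)

// decodeMailboxDomain converts the domain part of an email address from its
// ASCII (A-label) form to Unicode so both sides can be compared in one form.
func decodeMailboxDomain(mailbox string) (string, error) {
	at := strings.LastIndex(mailbox, "@")
	if at < 0 {
		return "", fmt.Errorf("not an email address: %q", mailbox)
	}
	domain, err := idna.ToUnicode(mailbox[at+1:]) // Punycode decoding happens here
	if err != nil {
		return "", err
	}
	return mailbox[:at+1] + domain, nil
}

func main() {
	decoded, err := decodeMailboxDomain("user@xn--bcher-kva.example")
	fmt.Println(decoded, err) // user@bücher.example <nil>
}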
From one perspective, these frank post-mortems are valuable. They force standards folks to confront hard questions and learn from mistakes. The concern, however, is that Valsorda's style of public critique without direct engagement in the process starts to delegitimize the process itself. When influential voices primarily give feedback from their personal pulpits, it paints a picture that the official venues are pointless or broken – otherwise, why avoid them? This dynamic can breed cynicism: implementers see a prominent engineer dismissing an RFC as a specification failure or a mailing list as a trashfire, and they come to distrust the standards body's competence. Meanwhile, none of the criticism is formally recorded in the standards track, so the group as a whole can't benefit from it (except indirectly if individual members happen to read his blog).
Undermining the Legitimacy of Shared Processes
Perhaps the most worrisome effect of Valsorda's approach is the subtle erosion of trust in the shared processes that produce interoperable standards. The IETF, for all its flaws, operates via open mailing lists, public archives, and consensus calls. It's not always pretty. In fact, even insiders joke about the rough edges of the discourse. Valsorda himself has frequently remarked on the unpleasantness of standards discussions. In a December 2020 AMA, he candidly described IETF mailing lists as "a bit of a trashfire" 5. On Mastodon earlier this year, he lamented the "hand-to-hand combat" nature of some technical collaboration spaces, saying "I'm a privileged white dude with 20k followers and even I hesitate to contribute to some spaces because of the mailing list hand-to-hand combat. Imagine how many contributions by talented folks we are wasting!" 6 These comments underscore a reality: the open forums of standards development can be hostile or exhausting, even for a well-known, well-respected participant.
The problem is that by voicing these frustrations mostly outside the process (to his thousands of followers on social media, rather than primarily to the mailing list itself or in the form of proposed reforms), Valsorda inadvertently delegitimizes the forums in the eyes of the community. If the IETF is portrayed as a dysfunctional "trash fire" to avoid, then naturally alternative venues – like one man's GitHub repo or blog – start to look appealing by comparison. Newer engineers who admire Valsorda may decide it's not worth engaging in IETF at all ("why bother, it's a toxic mess"), and instead follow whatever guidance comes from the influencers they trust. The result is a kind of self-fulfilling prophecy: the more veterans bypass the IETF to do work elsewhere, the less effective the IETF becomes, which then reinforces the perception that nothing good happens there.
There is also a tone factor. Using words like "sabotage" or "trashfire" to describe ongoing standards discussions (as Valsorda has on occasion on social channels) chips away at the presumption that these are legitimate, good-faith collaborative efforts. For example, calling an IETF debate or decision "sabotage" implies that the process is being wrecked from within – a grave charge that, if taken at face value, would justify abandoning the forum entirely. It's easy to see how repeated public broadsides of this nature can breed a narrative that "IETF discussions are hopelessly broken". Once that narrative sets in, implementers and companies might skip standards bodies for important work, preferring ad-hoc specs or unilateral solutions. That further undermines the shared nature of internet standards development.
None of this is to say that criticism of the IETF should be silenced or that Valsorda should suppress his opinions. The issue is how and where the criticism is directed. Because his critiques are not coupled with equal effort to fix things via the proper channels, the message the community receives is only the negative: "this registry approach is harmful," "this RFC decision caused a vuln," "the mailing list is a trashfire." The constructive path – working within the system to address those harms – is less visible. Thus, the shared process loses credibility without getting a chance to improve from the feedback.
Personal Specs Treated as Canonical References
One immediate social effect of this dynamic is that implementers increasingly treat Valsorda's personal publications as canonical guidance, at times more so than the actual standards. His blogs are widely read by engineers who implement crypto libraries and protocols. As a result, design philosophies he advocates can propagate quickly in new projects, even if they contradict or bypass IETF recommendations. For instance, after "Registries Considered Harmful" was published, one can observe a mindset shift in parts of the community toward preferring fixed algorithms. New encryption tools and protocols (often maintained by Valsorda or his peers) explicitly choose a single cipher or hash function with the intent to version-bump the whole protocol if it needs changing – exactly the approach he champions 3. This approach is reasonable in many contexts, but the fact that it's spreading via a blog post rather than an RFC or an academic paper means there hasn't been a rigorous, consensus-based vetting of when agility truly is or isn't needed. Implementers are effectively following one expert's opinion as law.
Similarly, Valsorda's personal specs under C2SP have started to gain adoption just through his advocacy and example code. The "age" file encryption format (which Valsorda co-created and wrote up initially outside any standards body) is now widely used as a modern alternative to PGP, with multiple implementations – despite not being an RFC or an ISO standard. Many developers have treated the age spec (a Markdown file in a GitHub repo) as if it were an established standard simply because it came from a reputable source and solved a real problem. Another example is in Certificate Transparency: his Sunlight CT log software and the accompanying static-ct API spec he wrote (again in C2SP) set a precedent that even big players like Google are now following. If Google and others effectively standardize CT log metadata via Valsorda's C2SP spec (instead of through the concluded IETF TRANS WG), then his spec becomes "the standard" by weight of industry adoption, not by consensus. The IETF's relevance in that area erodes accordingly.
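Part of why adoption spreads so easily is how little friction there is in practice. Below is a minimal sketch of encrypting and decrypting with the reference Go module, filippo.io/age, assuming its documented Encrypt/Decrypt helpers; error handling is abbreviated, and the snippet is illustrative rather than production guidance.

// A minimal sketch of programmatic use of the age format via the
// filippo.io/age Go module, to show how readily developers can adopt the
// spec through its reference implementation. Error handling is trimmed;
// consult the module's documentation before relying on this.
package main

import (
	"bytes"
	"fmt"
	"io"

	"filippo.io/age"
)

func main() {
	// Generate an X25519 identity (private key) and derive its recipient (public key).
	identity, err := age.GenerateX25519Identity()
	if err != nil {
		panic(err)
	}

	// Encrypt a message to the recipient.
	var ciphertext bytes.Buffer
	w, err := age.Encrypt(&ciphertext, identity.Recipient())
	if err != nil {
		panic(err)
	}
	io.WriteString(w, "hello, age")
	w.Close() // finalizes the age file framing

	// Decrypt it with the matching identity.
	r, err := age.Decrypt(&ciphertext, identity)
	if err != nil {
		panic(err)
	}
	plaintext, _ := io.ReadAll(r)
	fmt.Println(string(plaintext)) // hello, age
}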
There is a double-edged sword here. On one hand, having a talented individual rapidly produce well-engineered specifications can fill gaps that slow standards processes haven't yet addressed. Many fans of Valsorda's work would argue that he's solving real problems in a timely fashion – something especially valuable in security, where waiting years for an RFC can mean prolonged exposure. On the other hand, when the community bypasses the open standards process and effectively outsources the decision-making to a few maintainers, it concentrates a great deal of soft power in unelected, self-appointed hands. Valsorda, through the esteem he's earned, can publish a take or a spec and see it quickly treated as authoritative. That is power – the power to shape how others design systems – even without any formal mandate.
The Risk of Fragmentation and One-Person Authority
Even if that soft power is wielded competently (and there's little doubt Valsorda is technically competent and genuinely well-intentioned), it introduces risks to the broader ecosystem. One risk is ecosystem fragmentation. If one influential engineer's spec becomes canonical for his followers while others continue on a different path (perhaps the official standard or another competing spec), the community ends up divided. We could see situations where some implementations adhere to "Filippo's spec" and others adhere to "the IETF spec," and they don't interoperate. In cryptography especially, fragmentation can be dangerous – it might lead to duplicated effort, inconsistency in security guarantees, or simply confusion over which version is authoritative. The whole point of having open standards is to get everyone on the same page; the emergence of personality-driven "standards" undermines that unity.
Another risk is that concentrated influence lacks the checks and balances that a consensus group provides. No matter how smart one engineer is, more eyes and more diverse opinions often lead to better outcomes (or at least, catch mistakes early). The IETF's consensus process, for all its slowness, is meant to force consideration of varied use cases and threat models. When a personal spec is developed in a small circle, it might miss perspectives that a wider group would have offered. If the wider community simply defers to the spec as-is, those blind spots might only surface after deployment – or not at all, if the spec isn't openly critiqued the way an RFC would be. Essentially, it's a concentration-of-perspective problem: a personal design reflects the thinking of a small set of people (sometimes literally one person's worldview). The more that design is treated as gospel without broader vetting, the more the ecosystem banks on that limited worldview being 100% correct. History suggests that no one, not even the luminaries, gets everything right.
We should also consider the scenario where multiple such influencers exist. Today the focus is on Valsorda, but he's not the only expert with a large following and strong opinions. If each charismatic cryptographer decides to publish their own protocols and registries of choice – and if each accrues their own cadre of adherents – the industry could end up with a patchwork of quasi-standards each backed by different camps. Collaboration would give way to competition between personal fiefdoms of protocol design. Again, this is the very outcome the IETF was created to avoid. It's not hard to imagine a parallel here to the browser wars or proprietary protocol eras, but on a smaller scale driven by personalities rather than companies.
Conclusion: Rebalancing Influence and Process
Filippo Valsorda's case is a cautionary tale of what happens when the informal power of influence outpaces the formal mechanisms of consensus. It shows that even well-meaning, technically sound interventions can erode the collaborative fabric if they sidestep the usual channels. The critique here is systemic: it's about reinforcing the norms that keep our shared infrastructure truly shared. Standards processes thrive on public, incorporable feedback – on people bringing their bright ideas into the forum, not blasting the forum from outside. When that breaks down, when blog posts and Twitter threads replace mailing list debates and RFC drafts, the whole community loses visibility and control over its standards. Decisions start to happen in private or semi-private, guided by whoever has the loudest voice or most followers, rather than by collective agreement.
Rebalancing this dynamic doesn't mean silencing influential voices like Valsorda's – it means encouraging those voices to engage within the processes that can temper and incorporate their ideas. Imagine the benefit if the energy behind "Registries Considered Harmful" had manifested as an IETF draft or a presentation at an IETF meeting, subject to questions and improvements from many experts. Or if the concerns about X.509 internationalization had been raised during the working-group phase, perhaps leading to a different outcome or at least a documented security consideration. By channeling influence back into consensus governance, the community can have the best of both worlds: rapid innovation and shared, vetted standards.
As it stands, Filippo Valsorda's patterns highlight a growing soft power concentration that should give us pause. His blogs and specs are insightful, but when implementers treat them as canon while dismissing standards bodies as "trashfires," we inch toward a fragmented ecosystem beholden to personalities. However competent those personalities are today, in the long run the lack of broad collaboration and buy-in will degrade the resilience and inclusivity of our infrastructure. The challenge ahead is to reinvigorate the open processes – to make them more inviting (indeed, less of a trashfire) so that influential contributors want to participate, and to remind the community that no single individual, no matter how brilliant, should unilaterally set the course of our protocols. In the dry, analytical terms standards folks appreciate: we need to reduce the single-point-of-failure risk in our decision-making. That means reaffirming the value of consensus and gently curbing the trend of protocol by blog post. Only by doing so can we ensure that today's soft power doesn't harden into a fractured future for internet security.
Sources
1. C2SP: Community Cryptography Specification Project
2. Sunlight v0.4.0 discussion — ct-policy Google Group
3. Registries Considered Harmful — Filippo Valsorda
4. Why Did the OpenSSL Punycode Vulnerability Happen — Filippo Valsorda
5. I am Filippo Valsorda, Go cryptography lead and tool author, Ask Me Anything — r/crypto