
Introduction
There is a phrase that has been repeated too often in technology lately: "we made it from scratch". It sounds epic, it inspires, it sells... but it also opens up an uncomfortable question: from scratch for real, or from scratch "with a little help" (or a lot)?
In recent days a story has gone viral that mixes everything we are living through in 2026: AI agents, gigantic projects generated in very little time, and an increasingly tense conversation within free software. It all starts with an experiment associated with Cursor: an agent running for a week produced an experimental browser called FastRender, with millions of lines spread across thousands of files. The most repeated claim was that the rendering engine was "made from scratch" in Rust, and that it more or less worked.
Up to that point, anyone would say: "this is a historic leap". But when devs started to take a closer look at the repo, the real controversy arose: many critical parts relied on existing dependencies (parsers, selectors, SVG, layout, JS engine). And that nuance changes everything, not because it takes away merit, but because it forces us to talk about something more important: trust.
And that same discussion connects with another phenomenon that is already affecting open source projects: the massive arrival of "apparently correct" contributions, but without context, without criteria and without follow-up, made with AI. The result: exhausted maintainers, repos that are closed to external contributions and a cultural change that nobody wanted... but that has already started.
FastRender: the experiment that got everyone excited (and then sparked debate)
So that we are on the same page: FastRender is an open source Rust rendering engine project that seeks to demonstrate how far parallel agent-assisted programming can go. You can see the repository here: FastRender (official repo).
What's interesting is not just the number of lines. It's the idea: "if an AI can coordinate on huge code, what about tasks that previously seemed impossible for a single person?" In the repo it is described as an engine capable of parsing HTML/CSS, computing styles, doing layout, painting, and running JavaScript using an embedded engine.
And here appears the point that generated the controversy: the phrase "rendering engine from scratch" was popularized on social networks, as if everything had been reimplemented without relying on external parts. But several community analyses pointed out that an important part of the "difficult" work was delegated to existing libraries and components.
"From scratch" vs "with dependencies": the nuance that defines the conversation.
In development, using dependencies is not a sin. In fact, it's the norm. Nobody ships an operating system by writing from scratch the font management, a parser for everything, the image engine, the complete network stack, and the JS VM.
But there is a huge difference between these two phrases:
🔶 "We built a rendering engine from scratch."
🔶 "We built an experimental browser by integrating mature libraries and generating a large layer of glue code with agents."
The first suggests reinventing the core.
The second suggests integration engineering at scale + experimentation with agents.
Both can be impressive. But only one is transparent.
And here's the risk: when the headlines exaggerate, the community does what it always does: review the code. The conversation moves from "how amazing" to "why did they tell it like that?"
With AI this is amplified, because there is another reality: AI tends to produce a lot of "bridge code" (wrappers, adapters, integrations) that inflates metrics without necessarily equating to "difficulty solved". That doesn't mean it's useless. It means that measuring value by number of lines is less and less useful.
If there is one lesson FastRender teaches us, it is this:
In the agent era, value is not proven by volume; it is proven by clarity, reproducibility and traceability.
Side effect: "AI PRs" and maintainer fatigue
And now comes the most important (and most practical for your community).
While we discuss whether a project is "from scratch" or "from 0.5", there is something real going on in open source: maintainers receiving tons of Pull Requests made with AI.
The pattern repeats itself:
- The PR "looks right" at first glance.
- It compiles. It even passes some tests.
- But... it doesn't understand the context of the repo.
- It doesn't follow project guidelines.
- It changes things that no one asked for.
- It puts in unnecessary refactors.
- And, worse, the sender often fails to follow up.
That last point is devastating. Because maintaining open source is not about receiving code; it's about receiving shared responsibility.
A very well-publicized case was that of tldraw, which announced a temporary policy of automatically closing PRs from external contributors due to the increase in low-quality AI-generated contributions. If you want to read the author's full argument, it is here: "Stay away from my trash!" (tldraw).
It's not that they hate AI. It's that they hate the human cost of reviewing garbage. And that connects to a sentiment already seen in more projects: "if writing code is the easy thing to do, why would I want someone else to write it without understanding it?"
Besides, it's not just tldraw. It is becoming common to see projects adjusting their rules: AI-assisted contributions only on accepted issues, mandatory disclosure of AI usage, or outright closure to "drive-by" PRs. A good read to understand this climate is: We're Losing Open Contribution (Continue.dev).
The translation to the real world is tough:
AI is lowering the cost of producing code, but raising the cost of reviewing it.
The problem is not the AI: it is the lack of context (and of craft).
There is a mental trap here: believing that the problem is "bad AI". No. The problem usually is:
- People using AI without understanding the repo
- Large and noisy PRs
- Unsolicited changes
- Lack of communication
- Zero ownership
Maintainers are not refusing help. They are refusing work that creates more work for them.
And this is very noticeable when you see PRs doing things like:
- "while I was at it, I changed this structure."
- "I took advantage and reorganized folders."
- "refactored the entire file"
- "updated dependencies" (for no reason)
- "I changed naming (without need).
That "over-initiative" without context is a typical odor of misdirected AI.
Best practices: how to use AI without becoming an "AI slop contributor"
If your ClickPanda community develops software, contributes to open source, or maintains internal repos, this will be useful to you right now.
Rule 1: Small PR, clear purpose
If a PR cannot be explained in one sentence, it is too big.
✅ Good: "Fix null pointer in X."
❌ Wrong: "General improvements and code cleanup."
Rule 2: no collateral changes
If you are fixing a bug, fix the bug. Don't take advantage to "sort everything out".
Rule 3: explain the context like a human
Include:
- why it happens
- how you reproduced it
- what you changed
- how you validated it
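As a sketch of what those four points look like in practice, here is a hypothetical PR description (the function name, file, and details are placeholders invented for illustration, not from any real project):

```markdown
## Why
`parse_config()` panics when the config file is empty (link the accepted issue here).

## How I reproduced it
Started the app with an empty `config.toml`; it panics on startup.

## What I changed
Replaced the `unwrap()` with an explicit default. No other files touched.

## How I validated it
Added a regression test for the empty-file case; the full suite passes locally.
```

Four short sections, no noise: a reviewer can judge the change in under a minute.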
Rule 4: If you used AI, say so.
Not to punish you, but to help the reviewer understand the risk: is this verified or does it just "sound good"?
Rule 5: tests > words
If your change has no test and the project allows it, you are asking for faith. And faith in open source is running out.
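A minimal sketch of what "tests > words" means in a PR, in Rust since that is FastRender's language. The function `first_word` and the panic it fixes are invented for illustration; the point is that the fix ships together with the regression test that proves it:

```rust
// Hypothetical fix: before, this used `.unwrap()` and panicked on blank input.
/// Returns the first whitespace-separated word, or None if the input is blank.
pub fn first_word(input: &str) -> Option<&str> {
    input.split_whitespace().next()
}

#[cfg(test)]
mod tests {
    use super::*;

    // Regression test: encodes the exact failure the PR claims to fix.
    #[test]
    fn blank_input_returns_none_instead_of_panicking() {
        assert_eq!(first_word("   "), None);
    }

    #[test]
    fn returns_first_word_of_normal_input() {
        assert_eq!(first_word("fix the bug"), Some("fix"));
    }
}
```

With the test in place, the reviewer doesn't have to take your word for it: `cargo test` does the believing.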
Rule 6: own the change (follow-up)
If the maintainer leaves you comments: reply, iterate, correct. If you're going to submit and disappear, don't submit.
What does ClickPanda have to do with this? The practical part that nobody tells you
This all sounds philosophical... until you experience it in production.
Quality does not depend only on "who writes the code", it depends on:
✅ well-assembled staging environments
✅ stable CI pipelines
✅ automated testing
✅ minimal observability
✅ easy bug reproduction
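That checklist can start very small. As a hedged sketch (GitHub Actions syntax; the job names and `cargo` commands are assumptions for a Rust repo, adapt them to your stack), a CI pipeline that catches noisy PRs before a human reviews them looks like:

```yaml
# Hypothetical minimal CI workflow; step names are illustrative.
name: ci
on: [pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --check       # formatting drift = instant signal of a noisy PR
      - run: cargo clippy -- -D warnings
      - run: cargo test              # the "tests > words" rule, enforced by a machine
```

Three commands, and "it looks good" stops being the quality gate.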
This is where ClickPanda can help you in a very concrete way:
- If you are just starting out or need a quick staging for your app: SSD Hosting at ClickPanda
- If you need to manage multiple sites/subdomains with ease: cPanel Hosting
- If you are already at the "serious project" level with runners, Docker, E2E testing and full control: SSD VPS
- If you still don't have your digital identity properly set up: Domains at ClickPanda
The difference between a team that uses AI "well" and one that suffers is this:
The first has a system that detects errors quickly (CI + staging + tests). The second relies on "looking good".
Useful comparison (not of brands, but of approaches)
| Approach | Advantage | Risk |
| --- | --- | --- |
| "AI writes everything, I just accept" | Speed | Noisy PRs, meaningless changes, technical debt |
| "AI as co-pilot + small PRs + tests" | Maintains quality and speed | Requires discipline |
| "No AI, all manual" | Total control | You lose competitiveness in speed |
| "AI to generate + strong CI to validate" | Scale without losing confidence | You need stable infrastructure |
If you are building a product, platform or repo that you want to keep healthy while using AI, the smartest thing to do is to combine speed with a stable environment:
- Set up your staging in ClickPanda SSD Hosting and validate changes quickly.
- If you need real control for pipelines, containers, and tests: move up to ClickPanda SSD VPS.
- And if you still don't have your domain ready: get started today at ClickPanda Domains.
Conclusion
FastRender and all this controversy serve to ground a truth: AI can now produce software on a scale that was previously unthinkable. But that doesn't take away the most valuable thing about development: judgment, context and responsibility.
When someone says "from scratch" and is not accurate, trust suffers. And when thousands of PRs arrive "well formatted but empty of understanding", open source protects itself by closing doors.
Your advantage is not in "using AI" (everyone else will do that). Your advantage is in use AI with tradewith clear rules, small PR, testing, tracking and fast validation environments.
If you do that, the AI doesn't fill your repo with noise: it multiplies you. And then it really, really changes the game.