Maximilian Krause
deep-dive · 11 min read

Your app's Text has a tax. Pay it once.

The small invisible cost of Text and View rendering in React Native apps, and how to stop paying it on every render.

For a decade now, every React Native app has been paying a small, almost invisible cost on every render. Multiply it by the number of <Text>s and <View>s on screen, multiply that by the frame rate you’d like to hit, and that median Android device you are targeting starts dropping them before it even gets to do actual work.

I spent the better part of last year writing a Babel plugin to remove that cost. While the plugin is interesting, it’s not the point of this post. The point is the cost itself — what it actually is, why your profiler doesn’t tell you about it, and why it’s still in React Native’s source code, despite its core maintainers having stated they want it gone.

What <Text>Hello</Text> actually does

Open up react-native@0.83.0 and find Libraries/Text/Text.js. It’s 848 lines. Eight hundred and forty-eight lines for what almost every React Native developer treats as a quasi-primitive. Ask any junior, mid-level, or even most senior React Native engineers, and chances are they’ll tell you this component is a direct, native primitive of the platform’s UI toolkit. That’s not what it actually is.

Most of those lines are gated behind feature flags or __DEV__ branches, so the steady-state cost of rendering <Text>Hello</Text> is smaller than the line count suggests. But “smaller” is not “zero.” Here is roughly what runs, in order, every time React decides to render a single piece of text:

The wrapper destructures around thirty named props out of the props object. Even when twenty-eight of them are undefined, that’s still thirty property reads. Then it transforms accessibility properties, translating aria-label to accessibilityLabel, merging aria-busy/-checked/-disabled/-expanded/-selected into an accessibilityState object (allocating that object whenever any of the five is set), reconciling disabled with accessibilityState.disabled, mapping aria-hidden to accessibilityElementsHidden plus importantForAccessibility, and renaming id to nativeID.

Then it checks whether the Text is pressable, runs processColor on selectionColor (a regex plus bit math when present), clamps numberOfLines to non-negative, calls flattenStyle on the style array (walking and merging it on every render even when the style is a frozen StyleSheet.create reference), normalizes fontWeight from number to string, looks up userSelect in a map, and looks up verticalAlign in another map.

Then it decides whether to wrap the whole thing in a <TextAncestorContext value={true}> Provider, telling descendants they’re inside a Text. (The <View> wrapper does the inverse: when it detects a Text ancestor, it flips the context back to false, which is how nested Texts know to render as top-level RCTText rather than RCTVirtualText.) The Provider, when needed, is itself an extra fiber that React has to mount and commit. And, finally, it subscribes to TextAncestorContext. That subscription means React adds this Text to the context’s dependency list, so any change to whether we are “inside another Text” re-renders every Text in the subtree.
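To make the aria translation step concrete, here is a sketch of the shape of that work — the function name and exact structure are mine, not React Native’s internals, but the prop mappings are the ones described above:

```javascript
// Hypothetical sketch of the aria-to-accessibility translation the Text
// wrapper performs on every render. Illustrative only, not RN's source.
function translateAriaProps(props) {
  const {
    'aria-label': ariaLabel,
    'aria-busy': ariaBusy,
    'aria-checked': ariaChecked,
    'aria-disabled': ariaDisabled,
    'aria-expanded': ariaExpanded,
    'aria-selected': ariaSelected,
    'aria-hidden': ariaHidden,
    id,
    ...rest
  } = props;

  const out = { ...rest };
  if (ariaLabel != null) out.accessibilityLabel = ariaLabel;
  // Any of the five state props forces an accessibilityState allocation.
  if (ariaBusy != null || ariaChecked != null || ariaDisabled != null ||
      ariaExpanded != null || ariaSelected != null) {
    out.accessibilityState = {
      busy: ariaBusy, checked: ariaChecked, disabled: ariaDisabled,
      expanded: ariaExpanded, selected: ariaSelected,
    };
  }
  if (ariaHidden != null) {
    out.accessibilityElementsHidden = ariaHidden;
    out.importantForAccessibility = ariaHidden ? 'no-hide-descendants' : 'auto';
  }
  if (id != null) out.nativeID = id;
  return out;
}
```

Note that even when every branch is skipped, the destructure and the `{ ...rest }` allocation still run — that’s the part with no early return.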

For a <Text> with no fancy props, most of those checks early-return after a single comparison. The aria-translations don’t fire, processColor doesn’t fire, flattenStyle doesn’t fire. What does not early-return, regardless of what props you pass: a function-component invocation, the destructuring itself, the context subscription, and at least one extra fiber in the React tree.

A modern Pixel running Hermes spends roughly twenty-five to forty microseconds on a “trivial” Text. A trivial View costs around half that. These numbers are order-of-magnitude — they shift with prop shape, device thermals, bundle warmth — but they’re what you see in a CPU profile on a phone that costs $700.

The math that should bother you

Twenty-five microseconds is nothing.

A modest list row — name, subtitle, two badges, a timestamp, a price tag — is six or seven Texts and three or four Views. Open a real app: a banking transactions list is closer to ten Texts per row. A social feed item with quote-counts, reaction counts, author handle, timestamp, badge, body, and a footer can run fifteen. Scroll momentum keeps roughly twenty visible rows in the active commit window at any moment (if your list is properly optimized). That’s two to three hundred Texts and around a hundred Views in flight per commit during scroll. At twenty-five microseconds a Text and fifteen a View, you’ve spent six to ten milliseconds of your frame budget on JavaScript that, in most of those call sites, did not need to run at all.

You can do this math more aggressively. The benchmark in the react-native-boost example app renders 10,000 Texts and 10,000 Views in a single commit. That’s contrived. It’s also a useful upper bound: with the wrappers, the commit takes the better part of a second on an iPhone 16 Pro. Without them, it takes about half that. The gist of it is: the cost is linear in the number of host components, and the tax does not have a volume discount.

For most apps, the daily reality is somewhere between the contrived ten thousand and the well-behaved eight rows. Maybe a hundred Texts and forty Views per screen, ten screens in the average session, and the work compounding every time the user scrolls. Eight milliseconds on a modern iPhone is half a frame at sixty FPS. On a four-year-old Android device that’s already throttled because the user is on hour seven of their day, those wrapper components make the difference between a list that feels native and a list that doesn’t.

Why your profiler doesn’t tell you this

Hamel Husain has a great post titled Fuck You, Show Me The Prompt. It’s about LLM frameworks, but the diagnostic discipline ports: most of what is expensive about your software is hidden by the abstractions you built to make it less expensive to write. You cannot see the cost until you look at the actual thing.

Open a Hermes CPU profile for a real screen. You will not see “Text wrapper” sitting at the top with a damning percentage. You will see fifty-three thousand calls to a function named Text, each one taking thirty microseconds, none of them individually slow, all of them adding up. The profiler does not know which calls were “needed” and which were “tax.” It reports raw time.

So the discourse about React Native performance has been about the things that show up as a tall bar on a flamegraph. Bridge contention. JSON serialization. Yoga layout. Re-renders. We migrated FlatList to FlashList to LegendList. We adopted Hermes. We rewrote our state management. We turned on the New Architecture and watched our crash rate spike for a quarter while we sorted it out.

Don’t get me wrong. Those were the right fights. But they were also the fights with a single, clearly attributable bottleneck at the top of the profile. The wrapper tax doesn’t have a bottleneck — it has a constant factor. And constant factors are how you go from a 60 FPS app to a 45 FPS app without a single regression you can point to in a code review.

The smoking gun

I want to be very clear that this is not a novel discovery. It is a tax that the people who wrote React Native know about, have publicly committed to removing, and have since exported an escape hatch for.

Look at Libraries/Components/View/ViewNativeComponent.js, lines 42 and 43:

// Additional note: Our long term plan is to reduce the overhead of the `<Text>`
// and `<View>` wrappers so that we no longer have any reason to export these APIs.

The same comment, verbatim, sits in Libraries/Text/TextNativeComponent.js. The “APIs” being referred to are unstable_NativeText and unstable_NativeView — two components exported from react-native whose entire purpose is to let you skip the expensive JavaScript-based wrappers. They are, at runtime, the string tokens 'RCTText' and 'RCTView'. React’s reconciler treats string types as host components: when it sees one, it skips the JS function invocation entirely and goes straight to the native side.
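A toy model makes the string-token point concrete — this is an illustration of the reconciler’s dispatch behavior, not React’s actual code:

```javascript
// Toy reconciler dispatch: string element types are host components and
// skip JS invocation entirely; function types must be called.
function renderElement(type, props) {
  if (typeof type === 'string') {
    // Host component: hand the token ('RCTText', 'RCTView') straight
    // through to the native side. No function body runs in JS.
    return { host: type, props };
  }
  // Function component: invoke it, paying the wrapper's body cost.
  return type(props);
}

const unstable_NativeText = 'RCTText'; // at runtime, just a string token
renderElement(unstable_NativeText, { children: 'Hello' });
// → { host: 'RCTText', props: { children: 'Hello' } }
```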

The unstable_ prefix is doing the work the prefix usually does — “we’re exposing this, but only because some library is going to need it, and don’t expect this to be stable for app code.” Meta exported these specifically because library authors needed a way around the wrappers.

And the fix is slow in coming. RN 0.83 ships a feature flag called reduceDefaultPropsInText that strips three redundant default-prop checks from the wrapper. It’s a real improvement, and it’s off by default. The serious work — moving aria-translation into the host’s view config, building TextAncestor handling into Fabric’s C++ tree, deleting the wrappers — is structural work that will take a long time to roll out across the ecosystem. Reasonable people at Meta, who are honestly better engineers than I am, have written // TODO comments about this and gone home. I am not blaming them.

“But the wrappers are doing real work”

The honest pushback at this point is: those wrappers exist for a reason. The accessibility transformers aren’t decorative. The flattenStyle call isn’t there to fill time. The TextAncestorContext mechanism is the only thing that makes nested Text render correctly. If you delete the wrapper, you delete the work, and now your app is broken.

This is true. It’s also a non sequitur.

Almost everything the wrapper does is computable at build time, if you know what the call site looks like. If your <Text> has no onPress, you don’t need pressability. If it has no aria-* props, you don’t need aria translation. If its children are a string literal, you don’t need TextAncestorContext to disambiguate. If its style is a StyleSheet.create reference, you can flatten it once and cache the result by reference forever — the wrapper, charmingly, re-flattens it on every render.

Most Texts in a real app are simple. They render a string. They have a style. That is it. The wrapper is doing the right work for the small minority of Texts that need it, and exactly the same work for the large majority that don’t — because the wrapper can’t tell the difference at runtime. There is no point at which the function body can say “ah, this one’s simple, let me bail out early.” It runs the same function body for every Text, because by the time it runs, the source code is gone and only the call remains.

A compiler can tell the difference. That is its entire job: look at the call site, see what’s actually written, do only the work that’s necessary.

react-native-boost is a Babel plugin that does this. For each <Text> and <View> in your source, it decides what the wrapper would have done and either eliminates that work or moves the inescapable parts somewhere cheaper.
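Wiring it up is a one-line Babel config entry — the plugin path below is taken from the project’s README at the time of writing; check the current docs before copying:

```javascript
// babel.config.js — minimal setup, no options. The preset name assumes a
// standard React Native project; the plugin entry is react-native-boost's
// documented path (verify against its README for your version).
module.exports = {
  presets: ['module:@react-native/babel-preset'],
  plugins: ['react-native-boost/plugin'],
};
```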

A plain Text — string children, literal style, no onPress, no aria — is rewritten to unstable_NativeText and the wrapper vanishes entirely. A Text that needs some of the wrapper’s work still becomes unstable_NativeText, but with two small runtime helpers spread onto it: processTextStyle does the flatten and the fontWeight/userSelect/verticalAlign normalization, and processAccessibilityProps does the aria translation. The fiber, the destructure, and the context subscription are gone in either case — and processTextStyle caches its result in a WeakMap keyed by the input style reference, so the same StyleSheet.create style is flattened exactly once for the lifetime of the program, not on every render.
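The flatten-once trick can be sketched like this — a simplification with my own names; the real processTextStyle also does the fontWeight/userSelect/verticalAlign normalization:

```javascript
// Sketch of flatten-once caching keyed by style object identity.
// flattenOnce is a hypothetical name, not react-native-boost's API.
const flattenCache = new WeakMap();

function flattenOnce(style) {
  if (style == null || typeof style !== 'object') return style;
  const cached = flattenCache.get(style);
  if (cached !== undefined) return cached;
  // Merge style arrays left to right; plain objects pass through.
  const flat = Array.isArray(style)
    ? Object.assign({}, ...style.map(flattenOnce))
    : style;
  // Key by reference: a frozen StyleSheet.create-style object is
  // flattened exactly once for the lifetime of the program.
  flattenCache.set(style, flat);
  return flat;
}

const styles = [{ fontSize: 14 }, { color: 'red' }];
flattenOnce(styles) === flattenOnce(styles); // true: same cached object
```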

The hard engineering is on the View side, where you can’t just swap in <RCTView> because a View might be nested inside a Text — and if it has Text descendants of its own, they depend on the wrapper flipping TextAncestorContext back to false. Delete the wrapper without knowing about the ancestor and the inner Texts mis-layout. So the plugin walks the JSX tree at compile time, classifying each View’s ancestor chain as safe, text, or unknown. It unwraps memo and forwardRef, recurses into local function components that render the View via props.children, follows identifier aliases, and tracks cycles with WeakSets. safe gets optimized. text and unknown get skipped — unless you opt in to optimize the latter via a flag whose name (dangerouslyOptimizeViewWithUnknownAncestors) is doing the same job the unstable_ prefix does elsewhere in React Native.
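The classification can be pictured with a toy tree walk — a heavily simplified model, since the real plugin works on Babel’s AST and handles memo, forwardRef, identifier aliases, and cycles:

```javascript
// Toy classifier: walk up from a View node and decide whether its
// ancestor chain is 'safe', 'text', or 'unknown'. Illustration only.
function classifyAncestors(node) {
  for (let p = node.parent; p != null; p = p.parent) {
    if (p.type === 'Text') return 'text';       // wrapper must stay: it flips the context
    if (p.type === 'Unknown') return 'unknown'; // spread/alias the plugin can't resolve
    // 'View', 'Fragment', etc. are transparent: keep walking up.
  }
  return 'safe'; // provably no Text ancestor: eligible for RCTView
}

// Tiny tree: a View nested in a Text, and a top-level View.
const text = { type: 'Text', parent: null };
const viewInText = { type: 'View', parent: text };
const topView = { type: 'View', parent: null };

classifyAncestors(viewInText); // 'text' → skipped
classifyAncestors(topView);    // 'safe' → optimized
```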

The bias is asymmetric on purpose: a missed optimization wastes performance, but a false optimization breaks layout. Every bailout — the prop blacklist, the string-children check, the unresolvable spread, the unknown ancestor — chooses the former rather than risk the latter. For the call sites the plugin can’t statically prove, the wrapper still ships and still runs. For everything else, the tax is paid once at build time, and your users don’t pay it at all. The full safety story is in the docs.

Pay it once

Most of the React Native performance discourse has been about paths. Use the New Architecture path. Use the FlashList or LegendList path. Use the Hermes path. Each one is a different way through the call graph, and each one buys you a real, measurable win on the workloads it was designed for.

But the constant factor — the work that runs no matter which path you took, the destructuring and the context subscription and the redundant flatten — sits underneath all of those wins. You can adopt every shiny optimization React Native ships this decade and still pay the wrapper tax on every Text and every View on every screen. It’s leaky, it’s distributed, and it politely refuses to show up as the top frame of any single profile.

I think that build-time optimizations like these are something we’ll see a lot more of in future React Native versions. The runtime is shared infrastructure. It has to be general; it has to handle every prop combination that’s allowed; it can’t say “you’re not using this, so I’ll skip it.” A compiler can. The React Compiler is living proof.

Pay the tax once.
