Introduction
Pull requests are central to code review workflows, but as they grow from a few lines to thousands of files and millions of lines, maintaining a fast and responsive experience becomes a major challenge. At scale, rendering every diff line without optimization can lead to excessive memory consumption, sluggish interactions, and poor user experience. This guide provides a step-by-step approach to achieving performant diff lines, drawing on proven strategies used to improve the Files changed tab in GitHub's pull request interface. By following these steps, you will learn how to measure performance bottlenecks, optimize diff-line components, gracefully degrade with virtualization, and invest in foundational rendering improvements.

What You Need
- Knowledge of frontend performance principles (e.g., rendering, event handling, memory management)
- Familiarity with React (or a similar component-based framework) and its rendering lifecycle
- Browser profiling tools (e.g., Chrome DevTools Performance tab, Memory tab)
- Access to a large pull request (e.g., with thousands of files or millions of diff lines) for testing
- Understanding of metrics like Interaction to Next Paint (INP), JavaScript heap size, DOM node count
- Patience for iterative optimization – there is no single silver bullet
Step-by-Step Guide
Step 1: Measure and Analyze Performance Bottlenecks
Before making any changes, establish a baseline by profiling your current diff-line rendering. Use browser DevTools to record the page load and interaction on a large pull request. Pay attention to:
- JavaScript heap size – values above 500 MB indicate potential memory issues; extreme cases can exceed 1 GB.
- DOM node count – more than 200,000 nodes often causes sluggishness; 400,000+ makes interactions nearly unusable.
- Interaction to Next Paint (INP) scores – aim for under 200 ms; scores above 500 ms indicate noticeable lag.
Identify which parts of the diff view are slow. For example, is it the initial render, scrolling, or clicking on a line? Use flame charts to spot expensive React re-renders or heavy JavaScript functions. Document these findings – they will guide your optimization priorities.
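To make the baseline actionable, it helps to turn the thresholds above into a checklist you can run after each profiling session. The sketch below is a hypothetical helper (the name and shape are ours, not from any tool); the numbers simply mirror the guideline values in this step.

```typescript
// Hypothetical helper: classify a profiling snapshot against the rough
// thresholds described above. These numbers are guidelines, not a standard.
interface ProfileSnapshot {
  heapBytes: number; // JS heap size from the DevTools Memory tab
  domNodes: number;  // e.g., document.querySelectorAll("*").length
  inpMs: number;     // Interaction to Next Paint, in milliseconds
}

function findBottlenecks(s: ProfileSnapshot): string[] {
  const issues: string[] = [];
  const MB = 1024 * 1024;
  if (s.heapBytes > 500 * MB) issues.push("heap"); // >500 MB: memory pressure
  if (s.domNodes > 200_000) issues.push("dom");    // >200k nodes: sluggish layout
  if (s.inpMs > 200) issues.push("inp");           // >200 ms: noticeable lag
  return issues;
}
```

Recording a snapshot like this before and after each optimization gives you a simple regression check alongside the flame charts.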
Step 2: Optimize Diff-Line Components with Focused Techniques
This step improves performance for medium to large pull requests without sacrificing expected features like native find-in-page. Focus on making the primary diff experience efficient:
- Minimize redundant re-renders – Use `React.memo`, `useMemo`, and `useCallback` to prevent unnecessary updates when props or state haven't changed.
- Simplify line structure – Reduce nesting of elements. Flatten the DOM tree for each diff line, removing wrapper divs that add no value.
- Defer non-critical work – For features like syntax highlighting or diff statistics, compute them lazily or offload to a web worker.
- Optimize event handlers – Avoid inline functions; use event delegation where multiple lines share similar listeners.
These changes can reduce the JavaScript heap by 20-30% and improve INP scores by hundreds of milliseconds for large pull requests.
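To see why `React.memo` pairs with `useMemo` and `useCallback`, it helps to look at what `React.memo` actually does: it skips re-rendering when a shallow comparison of old and new props finds no change. The sketch below (the helper name is ours) mirrors that default comparison.

```typescript
// Sketch of the shallow props comparison React.memo performs by default.
// When this returns true, the memoized component skips re-rendering.
function shallowEqualProps(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): boolean {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  // Object.is matches React's comparison (handles NaN and -0 correctly)
  return prevKeys.every((k) => Object.is(prev[k], next[k]));
}
```

This is also why inline object and function props defeat memoization: a fresh `{}` or `() => {}` created on every parent render never compares equal, so the diff line re-renders anyway. Stabilizing those props with `useMemo` and `useCallback` is what makes `React.memo` effective.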
Step 3: Gracefully Degrade with Virtualization for the Largest Requests
When a pull request contains an extreme number of diff lines (e.g., millions), even optimized components hit a ceiling. Virtualization renders only the visible lines (plus a small buffer) and discards off-screen content. This keeps memory and DOM count under control.
- Choose a virtualization library – For React, `react-window` or `react-virtuoso` are popular choices. Implement a virtualized list for the diff content.
- Measure viewport size and item height – If diff lines have variable heights (e.g., due to wrapping), use dynamic measurement or a fixed average height with occasional recalculations.
- Handle search and find-in-page – Virtualization breaks native browser find because off-screen elements don't exist in the DOM. Implement a custom search that scans the full dataset (in memory) and scrolls to matched lines.
- Set a threshold – Only enable virtualization when the diff exceeds a certain count (e.g., 50,000 lines). Below that, use the optimized component from Step 2.
With virtualization, the DOM node count can stay under 5,000 even for million-line diffs, reducing memory to a few hundred MB and keeping interactions smooth.
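The core of virtualization is a small window calculation: which slice of lines should be mounted for the current scroll position. The sketch below assumes fixed-height lines (names are ours, not from `react-window` or `react-virtuoso`).

```typescript
// Minimal fixed-height windowing sketch: compute which diff lines to
// mount for the current scroll position, plus a small overscan buffer.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  lineHeight: number,
  totalLines: number,
  buffer = 10 // extra lines above and below to avoid flicker while scrolling
): { start: number; end: number } {
  const first = Math.floor(scrollTop / lineHeight);
  const visible = Math.ceil(viewportHeight / lineHeight);
  return {
    start: Math.max(0, first - buffer),
    end: Math.min(totalLines, first + visible + buffer), // exclusive
  };
}
```

Only `lines.slice(start, end)` is rendered, while a spacer element sized to `totalLines * lineHeight` preserves the scrollbar. With a 900 px viewport and 20 px lines, a mid-scroll window mounts about 65 lines regardless of whether the diff has a thousand lines or a million.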

Step 4: Invest in Foundational Components and Rendering Improvements
Improvements to core parts of your application compound across all pull request sizes. Even if a user never triggers virtualization, these changes make every review faster:
- Optimize CSS and layout – Avoid expensive styles like `box-shadow` and `border-radius` on thousands of elements. Use `contain: layout style paint` to limit reflows.
- Reduce dependency weight – Audit third-party libraries used in diff rendering. Replace heavy ones with lighter alternatives or custom code.
- Batch DOM updates – Instead of updating state on every scroll or mouseover, batch changes and apply them in a `requestAnimationFrame` callback or with React's concurrent rendering features.
- Use efficient data structures – Store diff line data as arrays of plain objects, not deeply nested structures. Access properties directly rather than through getters.
These foundational improvements can reduce baseline memory by 10-15% and improve INP scores even on small pull requests.
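As one example of the batching idea, high-frequency events can be coalesced so that only the latest update is applied once per frame. The sketch below is ours (not a library API); the scheduler is injectable so the logic can be tested, and in the browser you would pass `requestAnimationFrame`.

```typescript
// Sketch of batching high-frequency updates: many events per frame
// collapse into a single applied update when the scheduler fires.
type Schedule = (flush: () => void) => void;

function createBatcher<T>(apply: (latest: T) => void, schedule: Schedule) {
  let pending: T | undefined;
  let scheduled = false;
  return (update: T) => {
    pending = update; // later events within a frame overwrite earlier ones
    if (!scheduled) {
      scheduled = true;
      schedule(() => {
        scheduled = false;
        apply(pending as T); // one DOM/state update per frame
      });
    }
  };
}
```

Usage in the browser might look like `const onHover = createBatcher(renderHoverState, (f) => requestAnimationFrame(f))`: dozens of mouseover events per frame, one state update.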
Tips for Success
- Measure before and after each change – Keep a performance testing environment with a representative large pull request to validate improvements.
- Prioritize user experience – A feature that works perfectly but crashes on large files is worse than one that gracefully degrades. Always test with worst-case data.
- Combine strategies iteratively – Start with Step 2 optimizations for medium PRs, then add Step 3 virtualization for extreme cases. Each strategy builds on the previous.
- Document your assumptions – For example, if you set a virtualization threshold at 50,000 lines, note why. Future changes may shift that threshold.
- Monitor in production – Use real-user monitoring to track INP, memory usage, and DOM counts. Adjust thresholds based on actual usage patterns.
- Return to Step 1 if performance regressions appear – profiling should be an ongoing practice, not a one-time effort.