How do I export and analyze bank stock screening data?

Filter and sort banks in the screener, then export the results to a spreadsheet. From there you can build custom scoring systems, weighted rankings, peer group comparisons, and valuation calculations across all your candidates at once.

A screener is good at narrowing the universe and spotting individual standouts, but the most productive bank stock analysis happens after you move that filtered data into a spreadsheet where you control the structure, calculations, and comparisons.

Starting With the Right Screen

Before exporting anything, the screen itself needs to produce a manageable set. A results list of 20 to 50 banks is the practical sweet spot for spreadsheet analysis. If your screen returns 200 names, the filters are too loose and the export will be overwhelming to work with. Fewer than 10 results, on the other hand, suggests the filters may be too tight.

Set your filters to reflect your investing approach. A value-focused screen might filter on price-to-book (P/B) below 1.0, return on equity (ROE) above 8%, and equity-to-assets above 8%. A quality-focused screen might prioritize efficiency ratio below 55% and ROE above 12%. The criteria you choose shape everything downstream, so getting the initial screen right saves significant time later.

Once the results are filtered and sorted, the screener displays all available metrics for each bank in the results table. This full dataset is what you'll capture for offline analysis.
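
If your screener exports to CSV, the spreadsheet steps that follow can also be scripted. Below is a minimal sketch in Python with pandas; the file name and column names (ticker, pb, roe, efficiency_ratio, equity_to_assets, eps_growth) are assumptions about how an export might be labeled, not any particular screener's actual format.

    import pandas as pd

    # Load the exported screener results. The file name and column names are
    # placeholders; adjust them to match whatever your screener actually produces.
    banks = pd.read_csv("bank_screen_export.csv")

    # Columns assumed in the sketches that follow:
    #   ticker, pb, roe, efficiency_ratio, equity_to_assets, eps_growth
    print(f"{len(banks)} banks in the screened set")
    print(banks.head())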

Building Percentile Rankings

The first productive step in a spreadsheet is ranking each bank against the rest of the screened group. For each metric column, rank every bank from best to worst and convert that rank into a percentile score.

Say your screen returned 30 banks. The bank with the highest ROE gets a percentile score of 100, the lowest gets about 3.3 (a rank of 1 out of 30, scaled to 100), and everyone else falls proportionally in between. Repeat this for each metric and you can quickly spot which banks rank well across multiple dimensions versus those that look strong on one measure but fall behind on others.

This surfaces candidates that a single-column sort misses. A bank sitting in the top quartile for ROE, efficiency ratio, and P/B simultaneously is a fundamentally different prospect than one that tops the ROE list but lands in the bottom quartile on credit quality.
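
The same ranking takes a few lines of code. This sketch continues from the loading example above and uses pandas' percentile rank; it assumes higher is better for ROE, equity-to-assets, and EPS growth, and lower is better for P/B and the efficiency ratio, so those two columns are ranked in reverse.

    # Percentile-rank each metric within the screened group on a 0-100 scale.
    higher_is_better = ["roe", "equity_to_assets", "eps_growth"]
    lower_is_better = ["pb", "efficiency_ratio"]

    for col in higher_is_better:
        banks[col + "_pct"] = banks[col].rank(pct=True) * 100
    for col in lower_is_better:
        # Reverse the direction so a low P/B or efficiency ratio scores near 100.
        banks[col + "_pct"] = banks[col].rank(pct=True, ascending=False) * 100

    # With 30 banks, the best value in a column scores 100 and the worst about 3.3.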

Weighted Composite Scoring

Percentile rankings treat every metric equally, which rarely matches how an investor actually evaluates banks. A weighted composite score assigns importance to each metric based on your investment thesis.

A concrete example: suppose you're value-oriented and analyzing 30 screened banks across five metrics:

  • Price-to-book: 30% weight
  • ROE: 25% weight
  • Efficiency ratio: 20% weight
  • Equity-to-assets: 15% weight
  • Earnings per share (EPS) growth: 10% weight

Multiply each bank's percentile rank by the corresponding weight, then sum the results. If Bank A has percentile ranks of 85 for P/B, 70 for ROE, 60 for efficiency, 75 for equity-to-assets, and 50 for EPS growth, its composite score works out to (85 x 0.30) + (70 x 0.25) + (60 x 0.20) + (75 x 0.15) + (50 x 0.10) = 71.25.
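
The same arithmetic can be scripted against the percentile columns built in the ranking sketch above. The weights dictionary simply restates the value-oriented example and is illustrative, not a recommendation.

    # Value-oriented weights from the example above (they should sum to 1.0).
    weights = {
        "pb_pct": 0.30,
        "roe_pct": 0.25,
        "efficiency_ratio_pct": 0.20,
        "equity_to_assets_pct": 0.15,
        "eps_growth_pct": 0.10,
    }

    # Weighted sum of percentile ranks, e.g. 85*0.30 + 70*0.25 + 60*0.20
    # + 75*0.15 + 50*0.10 = 71.25 for the Bank A example.
    banks["composite"] = sum(banks[col] * w for col, w in weights.items())
    print(banks.sort_values("composite", ascending=False)[["ticker", "composite"]].head(10))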

Run this calculation for every bank and sort by composite score. The top-ranked names reflect your specific priorities, not just a single metric's ranking. A quality-focused investor would shift the weights toward efficiency and ROE, producing a different ordering entirely.

Tracking Trends Over Time

A single screening snapshot shows where a bank stands right now but says nothing about direction. After you identify candidates in the screener, pull the same metrics from SEC filings for the prior three to five years to fill in this gap.

Build a simple trend table for each top candidate with annual values for your key metrics. A bank whose ROE climbed from 8% to 11% over four years while its efficiency ratio dropped from 68% to 58% is on a clear positive trajectory. That bank might not rank first on any single current metric, but the trend makes it a stronger candidate than one with a static 11% ROE and no improvement path.
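
A trend table is easy to keep in the same workbook or script. The sketch below assumes you have keyed in annual figures by hand from SEC filings; the numbers are hypothetical and simply echo the improving bank described above.

    import pandas as pd

    # Hand-entered annual figures for one candidate (hypothetical values and years).
    trend = pd.DataFrame({
        "year": [2021, 2022, 2023, 2024],
        "roe": [8.0, 9.2, 10.3, 11.0],
        "efficiency_ratio": [68.0, 64.0, 61.0, 58.0],
    })

    # Simple direction check: change from the first year to the last.
    change = trend.drop(columns="year").iloc[-1] - trend.drop(columns="year").iloc[0]
    print(change)  # rising ROE and a falling efficiency ratio point the right way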

Trends also flag traps. A bank with currently solid metrics but rising non-performing loan ratios or a compressing net interest margin (NIM) may be headed for trouble that a point-in-time screen will not reveal.

Peer Group Comparisons

Screening data becomes more meaningful when organized into peer groups rather than ranked as one undifferentiated list. Community banks under $3 billion in assets operate very differently than regionals with $10 to $50 billion, so comparing their efficiency ratios or ROE figures directly can be misleading.

Pull five to eight banks of similar asset size and business model into a comparison table. Compute the peer group average for each metric, then measure how far each bank deviates from that average. A bank running 400 basis points above peer-average ROE is genuinely outperforming its closest competitors. A bank whose efficiency ratio runs 200 basis points worse than the peer average could be a laggard or, depending on context, a restructuring opportunity.
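
The peer comparison can be sketched the same way, under the assumption that a peer_group column has been assigned by hand (for example by asset-size bucket); deviations are expressed in basis points, treating one percentage point as 100 bps.

    # Assume a hand-assigned peer_group column, e.g. "community" or "regional".
    for col in ["roe", "efficiency_ratio"]:
        peer_avg = banks.groupby("peer_group")[col].transform("mean")
        # One percentage point of ROE or efficiency ratio = 100 basis points.
        banks[col + "_vs_peers_bps"] = (banks[col] - peer_avg) * 100

    print(banks[["ticker", "peer_group", "roe_vs_peers_bps", "efficiency_ratio_vs_peers_bps"]]
          .sort_values("roe_vs_peers_bps", ascending=False))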

Feeding Valuation Models

For investors who build valuation models, exported screening data provides the raw inputs. The Dividend Discount Model needs earnings per share, dividend payout data, and a growth rate assumption. The ROE-P/B framework uses current ROE and book value per share to estimate fair value relative to current price. The Graham Number requires EPS and book value per share (BVPS) to calculate an intrinsic value estimate.

Organizing these inputs in a spreadsheet lets you run valuation calculations across your entire screened list at once rather than one bank at a time. This batch approach often surfaces relative value opportunities that are invisible when evaluating banks individually. A bank that appears modestly undervalued on its own might stand out clearly when every peer in the group is trading at or above calculated fair value.
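
As one example of the batch approach, the Graham Number (the square root of 22.5 x EPS x BVPS) can be computed for the whole screened list at once. This sketch assumes eps, bvps, and price columns exist in the export, which may or may not be true for a given screener.

    import numpy as np

    # The Graham Number is only meaningful when EPS and BVPS are both positive.
    valid = (banks["eps"] > 0) & (banks["bvps"] > 0)
    banks["graham"] = np.sqrt((22.5 * banks["eps"] * banks["bvps"]).where(valid))

    # Negative values flag banks trading below the Graham estimate.
    banks["premium_to_graham"] = banks["price"] / banks["graham"] - 1
    print(banks[["ticker", "price", "graham", "premium_to_graham"]]
          .sort_values("premium_to_graham"))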

Common Mistakes in Post-Screening Analysis

A few recurring errors undermine otherwise solid analytical work:

  • Mixing data sources without checking definitions. If you pull some metrics from the screener and supplement with data from other sites or SEC filings, confirm the calculations match. ROE computed with end-of-period equity differs from ROE using average equity, and the gap can be large enough to reshuffle your rankings (a small illustration follows this list).
  • Ignoring what the numbers cannot capture. No set of ratios fully reflects management quality, regulatory risk, or loan portfolio concentration. Use quantitative screening to narrow and rank, then do qualitative research on the top candidates before acting on the results.
  • Over-weighting recent performance. A single strong quarter can make trailing twelve-month metrics look excellent while masking longer-term mediocrity. Cross-referencing with multi-year trends catches this.
  • Building overly complex scoring models. A weighted score across five or six metrics produces useful, actionable rankings. Adding fifteen metrics with conditional adjustments and interaction terms usually just introduces noise. Simpler models tend to produce more reliable results in practice.
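
To make the first point concrete, here is a small illustration with made-up figures showing how the two ROE definitions can diverge:

    # Hypothetical bank: net income of 120 on equity that grew from 900 to 1,100.
    net_income = 120.0
    equity_begin, equity_end = 900.0, 1_100.0

    roe_end_of_period = net_income / equity_end                          # about 10.9%
    roe_average_equity = net_income / ((equity_begin + equity_end) / 2)  # 12.0%
    print(f"end-of-period: {roe_end_of_period:.1%}  average-equity: {roe_average_equity:.1%}")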

See the glossary for definitions of bank investing terms used in this article.