How Scoring Works

Every Secho scan produces a score from 0–100 and a letter grade. Here's exactly how each scan type calculates its result.

Scoring Model Overview

All scan types use a 0–100 point scale with letter grades. The model differs slightly per scan type based on what is being measured.

3rd Party Risk (TPRM)

Each vendor is scored across 7 categories. The overall score is a weighted average of those categories, then prohibited vendor penalties are applied on top.

Category weights:
DNS / Domain — 10%
SSL / TLS — 15%
Email Security — 15%
HTTP Security Headers — 10%
Threat Intelligence — 20%
Breach & Exposure — 20%
Vendor Compliance — 10%
score = weighted_average(categories)
then apply prohibited vendor caps (see below)
Prohibited vendor caps:
CRITICAL prohibited vendor → score capped at 40
HIGH prohibited vendor → score capped at 60
Caps are lifted when findings are accepted in the portal
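
The weighted-average-plus-caps model above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, category keys, and input shapes are assumptions, not Secho's actual API; the weights and caps are the ones documented above.

```python
# Sketch of the TPRM model: weighted average of the seven category scores,
# then prohibited-vendor caps applied on top.

CATEGORY_WEIGHTS = {
    "dns_domain": 0.10,
    "ssl_tls": 0.15,
    "email_security": 0.15,
    "http_headers": 0.10,
    "threat_intel": 0.20,
    "breach_exposure": 0.20,
    "vendor_compliance": 0.10,
}

# Caps for un-accepted prohibited-vendor findings; accepting the finding
# in the portal lifts the cap.
VENDOR_CAPS = {"CRITICAL": 40, "HIGH": 60}

def tprm_score(categories: dict[str, float], prohibited: list[str]) -> float:
    """categories: 0-100 score per category;
    prohibited: severities of un-accepted prohibited-vendor findings."""
    score = sum(categories[name] * w for name, w in CATEGORY_WEIGHTS.items())
    for severity in prohibited:
        if severity in VENDOR_CAPS:
            score = min(score, VENDOR_CAPS[severity])
    return round(score, 1)
```

For example, a vendor scoring 90 in every category but carrying an un-accepted CRITICAL prohibited-vendor finding is capped at 40 until that finding is accepted.
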

GCP Audit / AWS Audit

Findings-based model. The score starts at 100 and each finding deducts points based on severity. The total penalty is normalized against the number of checks run, so identical findings weigh less on a larger scan.

Severity deductions per finding:
CRITICAL — 20 points
HIGH — 10 points
MEDIUM — 4 points
LOW — 1 point
penalty = (CRITICAL×20) + (HIGH×10) + (MEDIUM×4) + (LOW×1)
max_penalty = checks_run × 25
score = 100 - (penalty / max_penalty × 100)
score = max(0, score)
Accepted risks: Accepted findings are removed from the score calculation. Prohibited vendor caps apply, and are lifted, the same way as in TPRM.
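
The findings-based model is shared by the cloud, GitHub, AI, and document audits; only the denominator changes per scan type (checks_run, repos_scanned, or files_scanned). A minimal Python sketch, with illustrative function and field names that are assumptions rather than Secho's actual API:

```python
# Sketch of the shared findings-based model. Accepted findings are
# excluded from the penalty, which is what the portal's "adjusted
# score" reflects.

SEVERITY_DEDUCTIONS = {"CRITICAL": 20, "HIGH": 10, "MEDIUM": 4, "LOW": 1}

def audit_score(findings: list[dict], scope: int) -> float:
    """findings: [{"severity": ..., "accepted": bool}, ...];
    scope: checks_run, repos_scanned, or files_scanned."""
    penalty = sum(
        SEVERITY_DEDUCTIONS[f["severity"]]
        for f in findings
        if not f.get("accepted", False)
    )
    max_penalty = scope * 25
    score = 100 - (penalty / max_penalty * 100)
    return max(0.0, round(score, 1))
```

For example, two CRITICAL findings and one MEDIUM finding over 50 checks give a penalty of 44 against a max_penalty of 1250, so the score is 100 − 3.52 ≈ 96.5. Accepting one of the CRITICAL findings drops the penalty to 24 and raises the adjusted score.
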

GitHub Audit

Same findings-based model as cloud audits. Org-level findings (e.g. 2FA not enforced org-wide) carry more weight than per-repo findings since they affect the entire organization.

Severity deductions per finding:
CRITICAL — 20 points
HIGH — 10 points
MEDIUM — 4 points
LOW — 1 point
penalty = (CRITICAL×20) + (HIGH×10) + (MEDIUM×4) + (LOW×1)
max_penalty = repos_scanned × 25
score = 100 - (penalty / max_penalty × 100)
score = max(0, score)
Benchmark mappings (CIS, FedRAMP, NIST, SOC 2) are informational only and do not affect the score.

AI Audit

Identical findings-based model to cloud audits. AI-specific checks (Vertex AI exposure, training data access, service account hygiene) contribute findings with the same severity weights.

Severity deductions per finding:
CRITICAL — 20 points
HIGH — 10 points
MEDIUM — 4 points
LOW — 1 point
penalty = (CRITICAL×20) + (HIGH×10) + (MEDIUM×4) + (LOW×1)
max_penalty = checks_run × 25
score = 100 - (penalty / max_penalty × 100)
score = max(0, score)
NIST AI RMF, FedRAMP, and NIST 800-53 benchmark mappings are informational only — they do not affect the score.

Document Audit

File-based scoring. Score reflects the proportion of files that are clean vs. flagged, weighted by finding severity across all scanned documents.

Severity deductions per finding:
CRITICAL — 20 points
HIGH — 10 points
MEDIUM — 4 points
LOW — 1 point
penalty = (CRITICAL×20) + (HIGH×10) + (MEDIUM×4) + (LOW×1)
max_penalty = files_scanned × 25
score = 100 - (penalty / max_penalty × 100)
score = max(0, score)
AI-generated summary findings (INFO severity) do not affect the score or file status — only pattern-matched and AI-confirmed compliance issues count.

Grade Scale — All Scan Types

Scores map to letter grades consistently across all scan types. The portal shows both a raw score and an adjusted score.

A+  95–100
A   90–94
A-  85–89
B+  80–84
B   75–79
B-  70–74
C+  65–69
C   60–64
D   50–59
F   0–49
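
The band lookup implied by the table is straightforward: a score maps to the first band whose lower bound it meets or exceeds. A minimal sketch (the function name is illustrative, not Secho's API):

```python
# Grade bands from the table above, ordered from highest to lowest floor.
GRADE_BANDS = [
    (95, "A+"), (90, "A"), (85, "A-"), (80, "B+"), (75, "B"),
    (70, "B-"), (65, "C+"), (60, "C"), (50, "D"), (0, "F"),
]

def grade(score: float) -> str:
    """Return the letter grade for a 0-100 score."""
    for floor, letter in GRADE_BANDS:
        if score >= floor:
            return letter
    return "F"
```
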
Adjusted score: The adjusted score excludes findings that have been accepted (risk-accepted) by your team; accepted findings are removed from the penalty calculation entirely.

Common Questions

How edge cases are handled across all scan types.

What if a scan finds no issues?

Score is 100/100, grade A+. This applies to all scan types — a completely clean document audit, a GCP project with zero findings, or a domain with no prohibited vendors all score A+.

Can the score go below zero?

No. The score is always clamped to a minimum of 0. Even if the penalty calculation exceeds 100 points, the score floors at 0 (grade F).

How does accepting a risk change the score?

Accepting a finding in the portal removes it from the penalty calculation. The adjusted score and grade update immediately. For TPRM/cloud scans with prohibited vendor caps, accepting the vendor finding also lifts the cap, potentially raising the score significantly.

Why does the same finding severity mean different score impacts across scan types?

The max_penalty denominator scales with the scope of the scan — a GCP project with 200 checks run has a higher denominator than one with 50, so the same number of CRITICAL findings has a proportionally smaller impact on a larger scan. This prevents small scans from being unfairly penalized for having fewer possible checks.
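
As a worked example of the scaling described above, the same three CRITICAL findings cost far more on a 50-check scan than on a 200-check scan (a minimal sketch; the helper is illustrative, not Secho's API):

```python
SEVERITY_DEDUCTIONS = {"CRITICAL": 20, "HIGH": 10, "MEDIUM": 4, "LOW": 1}

def score(n_critical: int, checks_run: int) -> float:
    """Score for a scan with only CRITICAL findings."""
    penalty = n_critical * SEVERITY_DEDUCTIONS["CRITICAL"]
    max_penalty = checks_run * 25
    return max(0.0, round(100 - penalty / max_penalty * 100, 1))

small = score(3, 50)    # penalty 60 / max 1250 -> 100 - 4.8 = 95.2
large = score(3, 200)   # penalty 60 / max 5000 -> 100 - 1.2 = 98.8
```

The absolute deductions per finding are identical; only the denominator differs.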

Do benchmark mappings affect the score?

No. Benchmark mappings (CIS, FedRAMP, NIST 800-53, NIST AI RMF, SOC 2) are informational only. They appear in the portal as a separate tab to help with compliance reporting but have no effect on the score or grade.

Does the Document Audit score change between light and deep mode?

Yes — deep mode can find additional findings (or confirm false positives), so the score may differ. Deep mode AI analysis findings are counted the same way as pattern-matched findings. INFO-severity AI summary notes are never counted toward the score.