Is This Supplement Legit

Methodology

How supplement analysis works

Last updated: April 5, 2026

Overview

Every published ingredient hub is evaluated against a fixed editorial framework. The same categories are applied each time: ingredient identity and evidence fit, dosage plausibility, claim realism, and (where noted) category value context. Scores compress those inputs into comparable numbers - they do not replace clinical judgment.

Decision engine, not sponsored reviews

We do not write mood-driven reviews. We rank catalog SKUs with a fixed rubric: structured label reads, transparency signals, conservative safety checks, explicit value math where prices exist, and formula heuristics tied to published ingredient context. Compare pages, alternatives, and best lists exist so tradeoffs stay visible - commerce links never move scores.

How we rank products

Ingredient evaluation

Analysis starts from the declared active ingredient (or compound class the hub covers), not from brand storytelling. Published human trials and high-quality reviews weigh more than mechanistic-only work, surrogate-only endpoints, or marketing collateral.

Typical editorial classifications include:

  • Supported - Reasonable alignment between common use claims and human outcome data, given population and dose caveats.
  • Under-supported - Mechanistic or preliminary signal without proportional human confirmation for the claims in circulation.
  • Unsupported - Loud consumer claims with thin or absent human evidence at the stated level of certainty.

Dosage analysis

Label and common supplemental doses are compared to ranges that appear repeatedly in trial summaries and reference discussions. Material under-dosing relative to studied ranges is flagged because it breaks the link between marketing promises and what studies actually tested.

Dosage notes are informational; they do not prescribe a personal dose. Renal, hepatic, pediatric, and polypharmacy contexts require professional review.
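The under-dosing check described above can be sketched as a simple range comparison. Everything here is illustrative: the studied range, the example doses, and the "material" cutoff fraction are assumptions for demonstration, not the site's actual data or thresholds.

```python
def dose_flag(label_dose_mg: float, studied_low_mg: float,
              studied_high_mg: float, material_fraction: float = 0.5) -> str:
    """Classify a declared label dose against a studied dose range.

    `material_fraction` is an assumed cutoff: doses below this fraction
    of the low end of the studied range get flagged as material under-dosing,
    since they break the link between the marketing claim and what trials tested.
    """
    if label_dose_mg < studied_low_mg * material_fraction:
        return "materially under-dosed"
    if label_dose_mg < studied_low_mg:
        return "below studied range"
    if label_dose_mg > studied_high_mg:
        return "above studied range"
    return "within studied range"

# Hypothetical ingredient with a studied range of 200-400 mg:
print(dose_flag(80, 200, 400))   # materially under-dosed
print(dose_flag(250, 200, 400))  # within studied range
```

The half-range cutoff is only one reasonable choice; an editorial team could just as well gate on the median studied dose or on per-ingredient thresholds.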

Claim validation

Public-facing claims - storefront, influencer, and aggregate social narratives - are checked against what human trials actually measured. Unrealistic certainty, disease-treatment framing where evidence is preventive or associative only, and extrapolation from unrelated populations are surfaced.

The published hype score quantifies how far popular narratives run ahead of trial support. Higher hype means more disconnect, not "bad ingredient" by itself.
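One minimal way to picture that quantification is as a gap between two editorial ratings on the same 0-100 scale: how certain the popular narrative sounds, and how strong the human-trial support actually is. The function below is a sketch of that idea, not the published formula; both inputs are assumed editorial judgments.

```python
def hype_score(claim_strength: int, evidence_strength: int) -> int:
    """Hype as the gap between marketing certainty (0-100) and
    human-trial support (0-100). Floored at zero: claims that stay
    behind the evidence produce no hype, not negative hype."""
    return max(0, claim_strength - evidence_strength)

print(hype_score(90, 35))  # 55: narrative far ahead of trials
print(hype_score(40, 60))  # 0: claims stay behind the evidence
```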

Price vs value

Where relevant, category and retail context may be used to separate ingredient merit from shelf tactics (e.g., proprietary blends, dose obscuring, premium positioning without added evidence). This layer does not determine the evidence or safety scores on its own.

Outbound retailer links, including any affiliate relationships, do not influence numerical scores or verdict labels.

Final score logic

Published outputs combine: ingredient and claim evidence, dosage plausibility (folded into the evidence narrative), safety headroom, and hype gap. An overall score (0-100) aggregates those dimensions with a fixed internal rubric per hub. It is a sorting and orientation tool, not a personal recommendation.

Evidence score - Strength and relevance of human data for the outcomes readers care about; downgraded for small trials, heterogeneity, and over-reliance on non-independent sponsorship without replication.

Safety score - Tolerability, documented adverse patterns, vulnerable groups, and interaction risk in the public record. Higher means fewer red flags in the editorial read - not a guarantee for any individual.

Hype score - Magnitude of marketing-versus-trial disconnect; use alongside evidence, not alone.
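The aggregation of these three dimensions into a single 0-100 number can be sketched as a weighted sum, with hype entering inverted because higher hype means more disconnect. The weights below are illustrative assumptions; the actual per-hub rubric is fixed internally and not published here.

```python
def overall_score(evidence: float, safety: float, hype: float,
                  weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted aggregate on a 0-100 scale.

    Assumed weights: evidence 50%, safety 30%, hype gap 20%.
    Hype is inverted (100 - hype) so that a larger marketing-versus-trial
    disconnect pulls the overall score down rather than up.
    """
    w_e, w_s, w_h = weights
    return round(w_e * evidence + w_s * safety + w_h * (100 - hype), 1)

# Hypothetical hub: solid evidence, good safety headroom, moderate hype.
print(overall_score(evidence=70, safety=80, hype=40))  # 71.0
```

A linear blend like this keeps the output comparable across hubs, which matches the stated goal of sorting and orientation rather than personal recommendation.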

Verdict labels

Labels anchor the overall band and explicit safety gates. They are not regulatory certifications or individualized risk assessments.

  • Strong support - Human trials and reviews generally support common, reasonable uses in the population studied; marketing is usually less ahead of the literature.
  • Promising - Real signal exists but is uneven - smaller trials, narrower populations, or more industry involvement than preferred for a top band.
  • Mixed evidence - Studies conflict, endpoints differ, or replication is thin; reasonable readers can disagree on emphasis.
  • Weak evidence - Published human data are thin for the loudest claims; market enthusiasm often exceeds trial support.
  • Insufficient evidence - Not enough quality human research for confident conclusions; default stance is conservative.
  • Caution - Safety, interactions, vulnerable groups, or misuse patterns dominate; benefit discussion stays secondary until risk context is clear.
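The "anchor the overall band, apply explicit safety gates" logic can be sketched as a small mapping where the safety gate overrides the band. Band edges and the gate value below are illustrative assumptions, not the published rubric.

```python
def verdict(overall: float, safety: float, caution_gate: float = 40.0) -> str:
    """Map an overall score (0-100) to a verdict label.

    The safety gate is checked first: when safety falls below the assumed
    gate, the label is Caution regardless of the overall band, mirroring
    the rule that risk context dominates the benefit discussion.
    """
    if safety < caution_gate:
        return "Caution"
    if overall >= 80:
        return "Strong support"
    if overall >= 65:
        return "Promising"
    if overall >= 50:
        return "Mixed evidence"
    if overall >= 35:
        return "Weak evidence"
    return "Insufficient evidence"

print(verdict(overall=83, safety=75))  # Strong support
print(verdict(overall=83, safety=30))  # Caution: safety gate overrides the band
```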

Informational only

This methodology describes an editorial system. It does not diagnose, treat, or prevent disease. Decisions about supplements belong with a qualified professional who knows the full clinical picture.

What the platform does not do

  • Rank branded SKUs as "best in class" in exchange for paid placement.
  • Promise outcomes for any individual reader.
  • Substitute for clinician judgment in pregnancy, nursing, pediatrics, or complex care.