Secondata searches publicly available sources in real time and scores each one against category-specific quality criteria. Here is how that works.
Each search retrieves a broad set of source types fresh in real time; results are never cached or reused between searches. Each report therefore reflects what is publicly available at the moment of the search, not a fixed snapshot from a prior index.
Academic sources are verified in real time against the CrossRef and Semantic Scholar APIs.
Industry and practitioner sources cannot be verified via academic APIs; instead, the AI evaluates them on publisher credibility, methodology transparency, and recency. Each source's verified or unverified status is shown explicitly in the report.
Each source receives a quality score from 0 to 100 based on category-specific criteria.
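A category-weighted score of this kind can be sketched as a simple weighted rubric. The criteria names and weights below are hypothetical placeholders, not Secondata's actual scoring model; the point is only how per-criterion ratings combine into a single 0–100 value.

```python
# Hypothetical rubric for the "academic" category; names and weights
# are illustrative, not Secondata's actual criteria.
ACADEMIC_CRITERIA = {
    "methodological_rigor": 0.4,
    "citation_standing": 0.3,
    "recency": 0.3,
}

def quality_score(ratings: dict[str, float], criteria: dict[str, float]) -> int:
    """Combine per-criterion ratings (each 0.0-1.0) into a 0-100 score
    using the category's weights; unrated criteria count as 0."""
    total = sum(weight * ratings.get(name, 0.0)
                for name, weight in criteria.items())
    return round(100 * total)
```

Each source category would carry its own criteria dictionary, so an industry report and a peer-reviewed paper are scored against different rubrics while landing on the same 0–100 scale.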
Evidence certainty (Strong / Moderate / Limited) reflects the collective strength and consistency of the evidence base — not any single source. It accounts for methodological quality across all retrieved sources, degree of agreement or contradiction between findings, and how well the evidence covers the specific question asked.
Decision confidence (High / Medium / Low) translates evidence certainty into decision relevance. It accounts for question specificity, sector evidence availability, and how identified gaps may materially affect the specific decision being evaluated.
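The two tiers above can be sketched as a pair of mappings: a weakest-link roll-up of the evidence base into a certainty tier, then a translation into decision confidence that downgrades one level when identified gaps are material. All thresholds and the downgrade rule are assumptions for illustration, not Secondata's actual cutoffs.

```python
def evidence_certainty(avg_quality: float, agreement: float, coverage: float) -> str:
    """Collapse evidence-base metrics (each 0.0-1.0) into a certainty tier.
    The weakest dimension dominates; thresholds are illustrative."""
    strength = min(avg_quality, agreement, coverage)
    if strength >= 0.7:
        return "Strong"
    if strength >= 0.4:
        return "Moderate"
    return "Limited"

def decision_confidence(certainty: str, gaps_material: bool) -> str:
    """Translate certainty into decision confidence; a material gap
    (illustrative rule) downgrades confidence by one level."""
    ladder = {"Strong": "High", "Moderate": "Medium", "Limited": "Low"}
    level = ladder[certainty]
    if gaps_material and level != "Low":
        order = ["High", "Medium", "Low"]
        level = order[order.index(level) + 1]
    return level
```

Keeping the two mappings separate mirrors the document's distinction: certainty describes the evidence base itself, while confidence describes how well that base answers the specific question at hand.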
Questions about methodology? Contact us at hello@secondata.ai