MytheAi

Editorial Methodology

How we rank AI tools

The exact framework MytheAi uses to score 549 AI and SaaS tools. No black box, no commission-driven ranking, no sponsored placements in editorial lists. Read the criteria, audit any score, challenge any ranking.

Why methodology matters

Most AI directory sites do not publish how they rank tools because there is no real ranking - listings are sorted by affiliate commission rate, sponsored slot bids, or whoever paid for placement that quarter. The user gets a sorted list that has nothing to do with which tool is actually best for them.

MytheAi takes the opposite approach. Every tool, comparison, and Top 10 list is scored against a public seven-criteria framework. This page documents that framework completely so you can decide for yourself whether our rankings match your priorities, and so you can challenge any score you think is wrong.

The seven scoring criteria

We score each tool on a 1-5 scale across seven criteria. The composite score determines the editorial rating shown on tool pages and the rank order in Top 10 lists.

Criterion (weight): what it measures

Output quality (25%): On real tasks at default settings, not benchmarks. The single most heavily weighted criterion because it is what the user actually feels day to day.
Ease of use (15%): Time from signup to first useful output. Onboarding clarity. UI patterns that match user expectations versus tools that require relearning.
Pricing value (15%): Price for typical user volume, not list price. Free tier generosity. Hidden costs (per-seat, per-execution, overage). Cancellation friction.
Feature depth (15%): Coverage of common workflows in the category. Power features for advanced users. Quality of API or extensibility for team integration.
Integrations (10%): Native integrations with the dominant tools in adjacent categories. Webhook and API quality. Zapier or workflow automation support.
Reliability (10%): Uptime over the past year (where measurable). Error handling. Customer support response times. Data export options.
Trajectory (10%): Release velocity. Quality of recent releases. Funding and team stability signals. Whether the tool is improving or stagnating.

Composite score = weighted average of the seven criteria. A tool needs at least 4.0/5 to enter a Top 10 list.
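
To make the arithmetic concrete, here is a minimal sketch of the composite calculation using the weights from the table above. The data shapes and names are illustrative, not our production code.

```python
# Minimal sketch of the composite score. Weights match the criteria
# table above; function and variable names are illustrative.

WEIGHTS = {
    "output_quality": 0.25,
    "ease_of_use": 0.15,
    "pricing_value": 0.15,
    "feature_depth": 0.15,
    "integrations": 0.10,
    "reliability": 0.10,
    "trajectory": 0.10,
}

TOP_10_THRESHOLD = 4.0  # minimum composite to enter a Top 10 list


def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of the seven 1-5 criterion scores."""
    assert set(scores) == set(WEIGHTS), "all seven criteria must be scored"
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)


# Example: strong output quality carries a tool with average integrations.
example = {
    "output_quality": 4.5,
    "ease_of_use": 4.0,
    "pricing_value": 3.5,
    "feature_depth": 4.0,
    "integrations": 3.5,
    "reliability": 4.5,
    "trajectory": 4.0,
}
score = composite_score(example)
print(f"{score:.2f}", score >= TOP_10_THRESHOLD)  # 4.05 True
```

Note how the weighting plays out: because Output quality carries 25%, a half-point swing there moves the composite by 0.125, more than a full point of movement on any 10% criterion.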

How we test tools

For tools we can install or sign up for directly (most SaaS), we run a minimum 8-hour evaluation across two work sessions. We use the tool on real tasks from our backlog - drafting blog posts, comparing legal documents, building UI prototypes, summarizing meeting recordings - rather than synthetic prompts.

For enterprise tools that require a sales call to access (some compliance, healthcare, and CLM platforms), we rely on documented features, third-party reviews from G2 and Capterra, customer case studies, and industry analyst reports. These tools are flagged with a "limited hands-on" note where applicable.

For tools that change pricing or features frequently, we re-verify on a 90-day cadence and update the "Last verified" date on the tool page. Pricing is independently verified - we do not trust self-reported pricing on tool websites because it changes faster than vendors update copy.

How evidence is weighted

Every score has supporting evidence visible on the tool page. We tier evidence so readers can judge the strength of each claim. Tier 1 is hands-on testing on real tasks - the highest weight in our scoring. Tier 2 is third-party aggregate signal (G2, Capterra, Product Hunt scores from at least 50 verified users). Tier 3 is community signal (Reddit threads, Hacker News discussion, GitHub stars) which we treat as directional rather than conclusive.

When the three tiers agree, the score is high-confidence. When they disagree - say, hands-on testing exposes a flaw that aggregate scores miss because reviewers have not yet hit it - we lean on Tier 1 and note the disagreement on the page. Sources are listed at the bottom of each tool page and linked back to their original location. We never paraphrase third-party reviews as our own.
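
Here is a hedged sketch of how that roll-up could look in code. The tier ordering comes from the text above; the 0.5-point agreement tolerance and the averaging rule are illustrative assumptions, not the exact editorial formula.

```python
# Hypothetical sketch of the evidence-tier logic described above.
# Tier 1 = hands-on testing, Tier 2 = aggregate review sites,
# Tier 3 = community signal. The tolerance is an assumption.

def tiered_confidence(tier_scores: dict[int, float], tol: float = 0.5) -> tuple[float, str]:
    """Return (score to report, confidence note) for one criterion."""
    spread = max(tier_scores.values()) - min(tier_scores.values())
    if spread <= tol:
        # All tiers agree: average them and call it high-confidence.
        return sum(tier_scores.values()) / len(tier_scores), "high-confidence"
    # Tiers disagree: lean on Tier 1 and note the disagreement.
    return tier_scores[1], "tiers disagree, leaning on hands-on testing"


# Hands-on testing caught a flaw that aggregate reviews missed.
print(tiered_confidence({1: 3.0, 2: 4.4, 3: 4.2}))
# (3.0, 'tiers disagree, leaning on hands-on testing')
```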

How rankings are protected from affiliate influence

Affiliate commission rates are not visible to anyone making editorial decisions. The team member who maintains affiliate relationships is separate from the team member writing reviews. This is the simplest structural protection: if scoring does not see commission data, scoring cannot be biased by it.

Specifically: a tool that pays MytheAi a 30% recurring commission can rank below a tool that pays 0% if the lower-commission tool scores higher on the seven criteria. If you find a ranking that looks suspicious - say, a tool that is widely panned ranked above one with strong reviews - email us at info@mytheai.com and we will publish the score breakdown.

How we choose which tools to include

We do not list every AI tool that exists. The directory is curated, currently 549 tools, and each earns its place for one of three reasons: significant user adoption (10K+ verified users or a recognized brand presence), notable category innovation (a workflow or capability others do not have), or a real demand gap that readers ask us about.

Tools are removed from the directory when they shut down, get acquired and folded into another product, or fail our automated dead-tool scan (DNS failure, certificate error, 404 on the homepage, parking domain). We run this scan monthly; the most recent cleanup removed 11 dead tools.
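
As a rough illustration, the checks named above (DNS failure, certificate error, homepage 404, parking domain) could be scripted like this in Python with the requests library. The parking-page markers, timeout, and return shape are assumptions for the sketch, not our actual scanner.

```python
# Minimal sketch of a dead-tool scan: DNS failure, certificate error,
# 404 on the homepage, or a parking page all mark a tool as dead.
# The parking heuristic and timeout are illustrative assumptions.
import socket
from urllib.parse import urlparse

import requests

PARKING_MARKERS = ("this domain is for sale", "buy this domain", "domain parking")


def is_dead(url: str) -> tuple[bool, str]:
    host = urlparse(url).hostname or ""
    try:
        socket.getaddrinfo(host, 443)          # DNS check
    except socket.gaierror:
        return True, "dns-failure"
    try:
        resp = requests.get(url, timeout=10)   # verifies TLS by default
    except requests.exceptions.SSLError:
        return True, "certificate-error"
    except requests.exceptions.RequestException:
        return True, "unreachable"
    if resp.status_code == 404:
        return True, "homepage-404"
    if any(m in resp.text.lower() for m in PARKING_MARKERS):
        return True, "parking-domain"
    return False, "alive"


print(is_dead("https://example.com"))  # (False, 'alive') if reachable
```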

How comparisons are scored

Side-by-side comparisons (the /compare/ pages) use the same seven criteria, applied to both tools simultaneously. For each criterion, we score both tools independently and write a short note explaining the gap. This produces seven scored rows per comparison plus an overall summary and a "winner" call.

A few comparisons have no winner because the answer genuinely depends on the use case. For these we provide a decision matrix - "Pick A if your priority is X, pick B if your priority is Y" - rather than forcing a single recommendation. About 15% of our comparisons end this way.
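
To make the winner logic concrete, a small sketch under stated assumptions: the two composite scores go in, and comparisons that are too close to call fall through to the decision matrix. The 0.2-point tie margin is illustrative; in practice the no-winner call is editorial judgment about use-case fit, not a fixed threshold.

```python
# Illustrative sketch of the /compare/ verdict. Inputs are the two
# composite scores (weighted averages of the seven criteria). The
# 0.2-point tie margin is an assumption, not our editorial rule.

def verdict(score_a: float, score_b: float, margin: float = 0.2) -> str:
    gap = score_a - score_b
    if abs(gap) < margin:
        return "no single winner: provide a decision matrix"
    return "Tool A wins" if gap > 0 else "Tool B wins"


print(verdict(4.3, 3.8))  # Tool A wins
print(verdict(4.1, 4.0))  # no single winner: provide a decision matrix
```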

What we update and how often

Tool pages are reviewed every 90 days minimum. Pricing changes are caught faster - usually within 7 days - because we monitor pricing pages of major tools weekly. The "Last verified" date on every tool page is the source of truth.

Top 10 lists are reviewed every 60 days. When a new tool from our weekly batch of additions reaches Top 10 quality, it can displace a lower-ranked tool. The list is dated and the change is noted.

Comparisons are updated whenever either tool in the pair has a significant feature or pricing change. We do not update for cosmetic site refreshes. Updates are dated.
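
Summing up this section, the cadences could be expressed as a simple due-date check; the field names and helper below are hypothetical, not our tooling.

```python
# The review cadences above as data, plus a hypothetical check for
# pages due re-verification. Comparisons are event-driven (updated on
# significant changes), so they have no fixed cadence here.
from datetime import date, timedelta

CADENCE_DAYS = {
    "tool_page": 90,    # full review, every 90 days minimum
    "pricing": 7,       # pricing pages of major tools, monitored weekly
    "top_10_list": 60,  # Top 10 lists, reviewed every 60 days
}


def is_due(page_type: str, last_verified: date, today: date) -> bool:
    """True when a page's 'Last verified' date is older than its cadence."""
    return today - last_verified >= timedelta(days=CADENCE_DAYS[page_type])


print(is_due("tool_page", date(2026, 1, 1), date(2026, 4, 15)))  # True (104 days)
```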

Mistakes and corrections

We get things wrong. When we do, we correct fast and visibly. If you spot an error - factual, pricing, ranking, or otherwise - email info@mytheai.com with the URL and the issue. We aim to respond and correct within 48 hours for factual errors and within 7 days for ranking disputes that require re-scoring.

Corrections are noted at the top of the affected page. We do not silently edit history.

Curated by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.
