Perplexity AI hit $450M ARR in March 2026. One month earlier it was $305M. That’s nearly a 50% jump in a single month, driven almost entirely by the launch of “Computer,” their new AI agent product for enterprise teams.
But most people asking about Perplexity aren’t asking about enterprise agents. They want to know if it’s worth replacing Google.
It isn’t. That’s the honest answer. But it’s also the wrong question.
Perplexity earns a slot in your workflow alongside Google, not instead of it. Whether that slot is worth $20 a month depends entirely on what you’re using it for.
It’s not actually a search engine
Perplexity is an answer engine. When you type a query, it doesn’t return links. It reads the web on your behalf, synthesises the sources, and gives you a cited answer. The free tier uses Sonar — Perplexity’s in-house model, built on Meta’s Llama — for that synthesis. Pro Search, on the $20/mo plan, wraps GPT-5, Claude Opus 4.6, and o3-pro inside that same interface.
That model access is worth pausing on. You’re getting three frontier models inside a search UI for $20/mo. If you want to understand how those models stack up individually, check out our roundup of the best AI models in 2026 — but inside Perplexity, the point isn’t the model itself. It’s what the model does with real-time web data layered on top.
That’s a meaningful distinction. Claude Opus 4.6 inside Claude.ai and Claude Opus 4.6 inside Perplexity Pro are different products. Perplexity adds web retrieval, parallel search, and citation synthesis on top. For some tasks, that’s exactly what you want. For others, it’s unnecessary overhead.
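That retrieve-then-synthesise loop is the whole product. A minimal sketch of the pattern — with `retrieve` and `synthesize` as stand-ins for a search backend and a model call, not Perplexity’s actual internals:

```python
def answer(query, retrieve, synthesize):
    """Fetch web sources for `query`, then ask a model to produce a
    cited answer grounded in those sources. `retrieve` and `synthesize`
    are illustrative stand-ins, not real Perplexity APIs."""
    sources = retrieve(query)  # e.g. top-k web pages as text snippets
    numbered = [f"[{i + 1}] {s}" for i, s in enumerate(sources)]
    prompt = (
        "Answer the question using ONLY the sources below, "
        "citing them as [n].\n\nSources:\n"
        + "\n".join(numbered)
        + f"\n\nQuestion: {query}"
    )
    return synthesize(prompt)
```

The key design point is that the model never answers from memory alone: every claim in the output is supposed to trace back to a numbered source in the prompt — which is also why fabricated citations (more on that below) are a product-level failure, not just a model quirk.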
Where Perplexity genuinely wins
Deep research. That’s it, really. But it’s a meaningful win.
We ran a side-by-side test: asked both Perplexity Pro and Google what changed in Kubernetes 1.33 and how it affects existing deployments. Google returned 10 links — two official docs, three SEO-optimised listicles, and five Stack Overflow threads. Perplexity read them all and returned a structured synthesis in 1.4 seconds with inline citations. For that specific query, it saved 20–30 minutes of tab-switching. We now use it by default for any technical research that would otherwise require reading 5+ sources and piecing them together manually.
Deep Research mode, updated in January 2026, takes this further. Submit a complex research question and it runs multiple parallel searches, synthesises conflicting sources, and produces something close to a full research brief. They claim it compresses roughly five hours of analyst work into a single prompt. That’s probably generous, but the output is a solid first draft — not something you’d produce from scratch in the same timeframe.
Spaces, their team collaboration feature, adds persistent topic workspaces with threaded search history. It’s had positive reception from teams doing ongoing research work. There’s no real equivalent in standard Google Search.
Where it loses: everything that isn’t research
| Use case | Perplexity | Google |
|---|---|---|
| Technical research and synthesis | Wins clearly | Links, half of which are SEO spam |
| Local search (“best plumber near me”) | Generic, unreliable | Maps, reviews, distance |
| Shopping and price comparison | No product graph | Shopping tab, inventory data |
| Video content | Not integrated | YouTube native |
| Navigational queries | Adds friction | Instant |
If you’re trying to find a nearby restaurant, compare laptop prices, or pull up a tutorial video, Perplexity is worse than just opening Google. There’s no Maps integration, no product database, no YouTube. For anything local or transactional, it’s the wrong tool.
Google’s AI Overviews — Gemini-powered, now on most informational queries — are also closing the synthesis gap. For a lot of straightforward factual questions, you’ll get a cited answer in Google without switching to a separate subscription. The gap between these two products is narrower than it was in 2024.
The hallucination problem
Most Perplexity reviews gloss over this. They shouldn’t.
In October 2024, Dow Jones and the New York Post filed a lawsuit against Perplexity. Perplexity’s motion to dismiss — filed on jurisdiction and venue grounds — was denied in February 2026. Discovery is now active. The specific claims aren’t vague copyright complaints. The filing alleges three things: that Perplexity copied copyrighted works into its retrieval index, reproduced them verbatim in outputs, and generated hallucinated text attributed to their publications using their trademarks. Made-up quotes, attributed to real journalists, with the WSJ and NY Post names attached to them.
The New York Times filed its own suit in December 2025 with similar allegations.
We ran into this directly. A client asked us to pull background on a specific regulatory ruling for a memo. Perplexity returned a confident, well-formatted answer with three citations — one of which was a NY Post article that, when we checked the actual source, said something completely different from what Perplexity had attributed to it. The ruling itself was accurate. The supporting quote was fabricated. That’s exactly the scenario the Dow Jones lawsuit documents.
We now treat every Perplexity citation as a suggested source, not a finished reference. If you’re using it for anything with real stakes — legal memos, client reports, journalism — verify every claim independently. The citations tell you where to look. They don’t tell you what the source actually says.
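If you want to semi-automate that check, the core of it is simple string matching: does each quoted passage actually appear in the source it cites? A minimal sketch — the `"quote" [source_id]` citation convention here is illustrative, not Perplexity’s actual output format, and real sources would need fetching and HTML extraction first:

```python
import re

def verify_quotes(answer, sources):
    """For each "quoted passage" [source_id] pair in `answer`, check
    whether the quote appears (whitespace-normalised, case-insensitive)
    in the cited source's text. `sources` maps source_id -> full text.
    Returns (quote, source_id, found) tuples."""
    def norm(s):
        return re.sub(r"\s+", " ", s).strip().lower()

    results = []
    for quote, src_id in re.findall(r'"([^"]+)"\s*\[(\w+)\]', answer):
        source_text = sources.get(src_id, "")
        results.append((quote, src_id, norm(quote) in norm(source_text)))
    return results

answer = 'The court held that "the rule applies retroactively" [nypost].'
sources = {"nypost": "In fact, the court held that the rule applies only prospectively."}

for quote, src, ok in verify_quotes(answer, sources):
    print(f"[{src}] {'OK' if ok else 'NOT FOUND -- verify manually'}: {quote}")
```

A verbatim match doesn’t prove the quote is used in context, and a miss doesn’t prove fabrication (paraphrase, paywalls, dynamic pages) — but it flags exactly the failure mode we hit: a confident quote the source never contains.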
Pricing: Pro is defensible, Max is not
| Tier | Price | What you get |
|---|---|---|
| Free | $0 | ~5 Pro searches/day, unlimited standard search |
| Pro | $20/mo | Unlimited Pro Search, GPT-5, Claude Opus 4.6, o3-pro, file uploads, image generation |
| Max | $200/mo | Positioned at “power users” — exact delta vs Pro is unclear |
| Enterprise | up to $325/user/mo | Large organisations, includes “Computer” agent capabilities |
Pro at $20/mo is the only tier most people need. You get three frontier models inside a search interface at a price that’s hard to argue with, assuming you go in with realistic expectations about citation accuracy. If you’re curious how Claude Opus 4.6 and GPT-5.3 perform when you put them through the same tasks, we’ve covered how Claude and ChatGPT compare head-to-head — inside Perplexity you get models from both families, plus o3-pro, which is a reasonable value for a research-heavy workflow.
Max at $200/mo is harder to defend. We evaluated it for a team of four researchers for two weeks. We couldn’t identify a single workflow that Max unlocked that Pro didn’t already cover. The feature page doesn’t clearly spell out the actual delta between the two tiers. We cancelled Max and stayed on Pro. The only scenario where $200/mo makes sense is extremely high-volume research — a hedge fund analyst, maybe. Not a content team, not a dev team, not most consultants.
Enterprise at up to $325/user/month is a different product — built around the “Computer” agent capabilities, competing directly with ChatGPT for business use cases and Microsoft Copilot. If you’re evaluating enterprise AI platforms, that’s a separate conversation from search.
Final take
Use Perplexity Pro for research. Not instead of Google — alongside it.
The “Perplexity replaces Google” framing is marketing. Google’s ecosystem — Maps, Shopping, YouTube, the local graph — has no counterpart in Perplexity. Where Perplexity earns its money: technical synthesis, multi-source research, anything where you’d otherwise spend 20 minutes opening tabs and reading them yourself.
Max at $200/mo is almost certainly not worth it for individual users or most teams. Reassess if the feature delta vs Pro gets clearly documented.
Treat its citations as starting points. The lawsuits are active, discovery is underway, and the hallucination problem is documented in court filings — not an edge case, and not something that’s been patched. Perplexity also doesn’t build its own AI models. It wraps models from OpenAI, Anthropic, and Google. If those API relationships change, the product changes. Useful tool, foundations it doesn’t control. Use it with that in mind.
