🌁#87: Why DeepResearch Should Be Your New Hire

Community Article · Published February 10, 2025

– this new agent from OpenAI is mind-blowing and, I can't believe I'm saying this, worth $200/month

--

This Week in Turing Post:

  • Wednesday, AI 101, Technique: What are Chain-of-Agents and Chain-of-RAG
  • Friday, Agentic Workflow: we explore Reasoning

🔳 Turing Post is on 🤗 Hugging Face as a resident -> click to follow!


The main topic – Deep Research from OpenAI makes me rethink my work routine

Turing Post is a lean operation, run full-time by just two people. While I work with a few trusted contributors, the heavy lifting is done by Alyona and me. I wasn’t actively looking to add someone new to the team – yet here we are, and I couldn’t be happier about it.

Meet our newest hire: DeepResearch – $200/month.

Despite the ongoing controversies surrounding OpenAI, their latest release, DeepResearch (seriously, you guys need to up your naming game), is a game-changer. It’s not replacing anyone at Turing Post, but it has significantly cut down the hours we spend on research-heavy tasks. What used to take ages now gets done in a fraction of the time. To the point that I'm now rethinking my well-established workflow.

What do people say on the web?


I didn’t have to give up a Perplexity subscription – I never had one. Instead, I’ve been using a combination of cross-prompting Gemini Deep Research and ChatGPT o1 or o3-mini, but DeepResearch might simplify that routine. It shifts the workflow from active searching to supervising an AI-generated research process. It’s a different level of actual help: you give a virtual research assistant a prompt, step away while it works, and come back to a finished analysis.


My summary: DeepResearch is an amazing, well-organized starting point. I also liked that it feels like the promise of a working agent. DeepResearch will ask clarifying questions if your prompt is ambiguous, then proceed step by step. The result is a more robust, context-aware research process than a single-turn question-answer system. And if you don’t know the answers to its questions, just say, “Do as you see fit, knowing what I’m working on.” It does a pretty good job of figuring things out on its own. Pretty dope.

It also understands time frames: if you need fresh materials from February 3 to February 10, it will search specifically within that range.

Andrew Maynard, a professor and author, wrote that after using DeepResearch he’s “beginning to wonder when research and scholarship that isn’t augmented by AI will be seen as an anachronism.” He adds: “Using it feels like giving a team of some of the best minds around PhD-level questions, and having them come back with PhD-level responses – all within a few hours” (from “Does OpenAI's Deep Research signal the end of human-only scholarship?”).

This means DeepResearch can identify cross-domain links or examples that might otherwise be overlooked, offering fresh perspectives. In professional settings, this can support more well-rounded decision-making – for example, a product manager can quickly gather insights from scientific research, market data, and consumer opinions in one place, rather than relying on multiple teams or lengthy research processes. It makes you multifaceted!


Ethan Mollick was impressed by the depth, but he and others, like economist Kevin Bryan, pointed out the current limitations in data access: access to better search and paywalled content would make such agents far more useful.

How does DeepResearch work?

OpenAI’s breakthrough with DeepResearch lies in its ability to take coherent actions throughout its reasoning process. Unlike traditional AI agents that struggle with long-term focus, this model maintains progress without getting distracted. Unlike Gemini’s approach, which searches for sources first and then compiles a report, OpenAI’s version dynamically searches and takes actions as needed. This makes it more adaptable and efficient. Under the hood, it’s powered by the o3 model with reinforcement learning, allowing it to act within its reasoning process. The results depend on the chosen research target.
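
OpenAI hasn’t published how DeepResearch works internally, so treat the following as a rough sketch of the difference described above, not the actual implementation. It contrasts a “search first, then compile” pipeline with an interleaved loop where the model decides its next action while reasoning; fake_search and fake_reason are made-up stand-ins for a search API and an LLM call.

```python
# Illustrative sketch only: OpenAI has not published DeepResearch internals.
# The point is the control flow: "gather sources, then write" (Gemini-style)
# vs. a loop that decides its next action while reasoning over the evidence so far.

def fake_search(query: str) -> list[str]:
    """Stand-in for a web/search-API call; returns text snippets."""
    return [f"snippet about '{query}' #{i}" for i in range(3)]

def fake_reason(goal: str, notes: list[str]) -> dict:
    """Stand-in for an LLM step that looks at the notes and picks the next action."""
    if len(notes) < 6:  # pretend the model still wants more evidence
        return {"action": "search", "query": f"{goal} (follow-up {len(notes)})"}
    return {"action": "finish", "report": f"Report on '{goal}' built from {len(notes)} snippets."}

def search_then_compile(goal: str) -> str:
    """Gather all sources up front, then write the report in one pass."""
    notes = fake_search(goal) + fake_search(goal + " recent developments")
    return fake_reason(goal, notes)["report"]

def reason_and_act(goal: str, max_steps: int = 10) -> str:
    """Interleave reasoning and actions: each step decides whether to search again or stop."""
    notes: list[str] = []
    for _ in range(max_steps):
        decision = fake_reason(goal, notes)
        if decision["action"] == "finish":
            return decision["report"]
        notes.extend(fake_search(decision["query"]))  # act, then reason again with new evidence
    return fake_reason(goal, notes)["report"]  # fall back if the step budget runs out

if __name__ == "__main__":
    print(search_then_compile("how DeepResearch works"))
    print(reason_and_act("how DeepResearch works"))
```

The real system presumably layers much more on top (the RL-trained o3 model, source evaluation, tool selection), but the loop shape is what the description above points at.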

Here is a lengthy research report produced by DeepResearch following the prompt: ‘I need technical details about how DeepResearch from OpenAI works. Give me model architecture, system architecture, and deeper insights into proprietary aspects’.

Not without limitations

  • Occasional Inaccuracies and Hallucinations – Like every LLM, it can misstate facts, confuse similar terms, or generate incorrect information. ALWAYS verify.
  • Difficulty Assessing Source Credibility – Doesn’t always distinguish authoritative sources from unreliable ones, sometimes including outdated or low-quality information.
  • Outdated or Stale Information – May cite old data, especially in fast-changing fields, unless explicitly prompted for the latest updates.
  • Inconsistent Instruction Adherence – Sometimes includes topics it was told to exclude or doesn’t fully follow user guidance.
  • Potentially Incomplete in Niche Depth – Might miss important details or references that an expert would consider essential.
  • Overwhelming Length and Irrelevant Details – Tends to provide exhaustive reports, sometimes including excessive or tangential information.
  • High Cost and Limited Access – Available only to ChatGPT Pro users at $200/month, making it inaccessible to many casual users.
  • Opaque “Black Box” Reasoning – Users don’t see how it selects or evaluates sources, making its conclusions harder to fully trust without verification.

But keep in mind: this is the worst this technology will ever be.

Best Practices for Using DeepResearch Efficiently

  • Craft a Detailed, Focused Prompt – Be clear and specific in your query to avoid irrelevant results. Use ChatGPT to refine your prompt before submitting it (see the prompt-building sketch after this list).
  • Provide Context or Examples – Giving background information or specifying the desired answer format helps guide the AI’s research. A lot. Here is an example from Ben Thompson, author of Stratechery.

(Screenshot: Ben Thompson’s example prompt.)

  • Engage with Clarification Questions – Answer any follow-up questions from DeepResearch to fine-tune its direction before it starts searching. Its questions are helpful on their own, prompting you to think things through and clarify what you really want.
  • Specify Scope and Bias Preferences – Direct the AI on preferred sources, date ranges, or perspectives (e.g., “focus on peer-reviewed studies” or “exclude politically biased sources”).
  • Verify and Refine the Output – Treat the AI's report as a first draft, fact-check key claims, and run follow-up queries to clarify or correct missing details.
  • Request Summaries or Actionable Insights – After a long report, ask for a concise summary, key takeaways, or recommendations to make the information more digestible.
  • Manage Time – Plan around DeepResearch’s processing time (5–30 minutes) and work on other tasks while it’s “thinking”.
  • Maintain a “job description” for your new employee – Create a list of tasks that DeepResearch can assist with or automate right now. Keep track of how others use it. Try incorporating it into your routine and adjust as needed.
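
To make the checklist concrete, here is a small, purely illustrative Python helper that assembles a prompt covering context, desired format, time frame, and source preferences. The field names and the sample topic are my own, not an official template, and you could just as well write the prompt by hand.

```python
# Hypothetical prompt template, not an official OpenAI format: it simply packs
# the best practices above (context, desired output, scope, date range,
# source preferences) into one DeepResearch request.

from textwrap import dedent

def build_research_prompt(topic: str, context: str, output_format: str,
                          date_range: str, source_rules: str) -> str:
    return dedent(f"""\
        Research task: {topic}

        Context: {context}
        Desired output: {output_format}
        Time frame: only use materials from {date_range}.
        Sources: {source_rules}
        If anything is ambiguous, ask me clarifying questions before you start.
        """)

print(build_research_prompt(
    topic="new open-source reasoning models and how they compare to o3-mini",
    context="I write a weekly AI newsletter for a technical but non-research audience.",
    output_format="a structured report with sections plus a 10-bullet executive summary",
    date_range="February 3 to February 10, 2025",
    source_rules="prefer papers, official blog posts, and benchmarks; flag anything paywalled or unverified",
))
```

Whether you script it or type it by hand, the point is the structure: context, format, scope, and source rules up front, so DeepResearch spends its 5–30 minutes on the right question.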

Have you tried it? What are your recommendations? Will it change your work routine?

LEAVE A COMMENT

Curated Collections


We are reading/watching

  • Three Observations from Sam Altman. I don’t usually analyze texts in this section. But here are a few highlights you can’t miss from Altman’s text:

    • “AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.”
    • “AGI is just another tool in this ever-taller scaffolding of human progress we are building together.”
    • Observations: 1. “The intelligence of an AI model roughly equals the log of the resources used to train and run it.” 2. “The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.” 3. “The socioeconomic value of linearly increasing intelligence is super-exponential in nature.”
    • “We are now starting to roll out AI agents, which will eventually feel like virtual co-workers.”
    • “Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs.”
  • The End of Programming as We Know It by Tim O’Reilly

  • Must-see course about LLMs by Andrej Karpathy (a 3+ hour video)

Top models to pay attention to

The freshest research papers, categorized for your convenience

There were quite a few super interesting research papers this week; we mark the ones we recommend most with 🌟 in each section.

LLM Techniques and Optimizations

Reasoning and Multi-Step Problem Solving

Model Efficiency and Scaling

Alignment and Safety Improvements

Domain-Specific Applications of LLMs

Open-Source vs. Proprietary LLM Innovations

That’s all for today. Thank you for reading!


Please share this article with your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.

