London: OpenAI has launched a new feature called deep research, aimed at professionals who need structured, in-depth investigations. Available to ChatGPT Pro users, it synthesises information from multiple sources, though OpenAI advises caution about source credibility and the reliability of AI-generated findings.
OpenAI has unveiled a new ChatGPT feature called deep research, designed to support structured, multi-step investigations. It is aimed at professionals across sectors including finance, science, and policy analysis, supporting research workflows that synthesise information from multiple sources into comprehensive reports.
The feature differs from standard ChatGPT interactions in taking considerably longer to complete, with response times ranging from five to thirty minutes depending on the complexity of the query. Deep research allows users to generate in-depth findings for tasks such as policy analysis, technical market reports, and comparisons for major purchases. The tool is currently available only to ChatGPT Pro users, who can submit up to 100 queries each month. OpenAI has said it intends to extend access to Plus, Team, and Enterprise users soon, although no timeline has been given for markets including the UK, Switzerland, and the European Economic Area.
Users activate deep research within the ChatGPT interface by selecting it in the message composer and can enhance their queries by attaching files like spreadsheets or PDFs for additional context. Upon initiation, the AI autonomously scans the internet, interprets relevant data, and compiles a structured response, with a sidebar feature that allows users to track the step-by-step research process undertaken by the AI.
In its initial phase, deep research outputs will be text-only, but OpenAI has confirmed that future enhancements will incorporate visual elements such as images and charts. The technology that powers deep research is based on OpenAI’s specialised o3 model, optimised specifically for research-intensive tasks that require extensive analysis and multi-step reasoning.
OpenAI says deep research has been trained with reinforcement learning techniques to improve its performance on real-world research tasks. However, the company has cautioned users about potential pitfalls: the AI may struggle to distinguish credible sources from misinformation, and it may not accurately express uncertainty or produce consistent citation formats.
The competitive landscape for AI research assistants has intensified with Google's announcement of a similar tool, also named Deep Research, which is in an experimental phase and being tested by Gemini Advanced users. The initiative forms part of Google's broader Project Mariner, which aims to develop AI agents capable of autonomously browsing the web and producing structured research documents, underlining a significant trend in the evolution of automated research tools.
Performance benchmarks place OpenAI's deep research tool ahead of several competitors. On Humanity's Last Exam, an expert-level benchmark, it scored 26.6%, well ahead of models such as Google's Gemini Thinking, which scored 6.2%, and OpenAI's own GPT-4o, which recorded only 3.3%.
Despite its promising features, OpenAI has acknowledged limitations in the accuracy of deep research, urging users to verify the credibility of the information generated. The tool is currently limited to publicly available internet content and user-uploaded files; it cannot draw on proprietary databases or subscription-only academic journals, which may restrict its usefulness in some professional contexts.
Reliability also remains a concern, as AI-generated misinformation has been an issue with previous models. OpenAI's deep research is not exempt from such vulnerabilities, and recent audits of rival AI systems have raised alarm over the risk of producing misleading information.
Looking ahead, OpenAI has signalled that deep research is a first step towards more advanced AI-assisted tools capable of conducting in-depth investigations independently. The current restrictions on access reflect the feature's high resource demands, but OpenAI plans to extend availability to more user tiers and additional platforms, including mobile and desktop applications, in the near future.
Concurrent developments from competitors such as Google and Microsoft suggest that automated research capabilities are rapidly becoming a critical area of focus in the AI sector. As deep research is integrated with other tools, OpenAI is positioning its offerings to streamline and enhance professional workflows across disciplines, even as the fundamental question of whether AI can replicate the nuanced, critical evaluations performed by human researchers remains open.
Tools such as deep research point towards a future in which AI not only assists but actively contributes to the research process, transforming how information is gathered and synthesised in professional contexts. Nonetheless, human oversight and critical judgement remain essential as the capabilities of AI continue to evolve.
Source: Noah Wire Services