
Anthropic Launches Opus 4.7 AI Model, Focusing on Coding, Visual Tasks, and Cybersecurity Guardrails
- By John K. Waters
- 04/21/26
Anthropic has introduced Claude Opus 4.7, an updated large language model that it says outperforms its predecessor on software engineering tasks, image analysis, and multi-step autonomous work, while keeping pricing at $5 per million input tokens and $25 per million output tokens.
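At those rates, per-request cost is easy to estimate. A minimal sketch (the token counts in the example are illustrative, not figures from Anthropic):

```python
# Published Opus 4.7 rates, in dollars per million tokens.
INPUT_RATE = 5.00
OUTPUT_RATE = 25.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call at Opus 4.7 pricing."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 12,000-token prompt producing a 2,000-token reply.
print(round(request_cost(12_000, 2_000), 4))  # → 0.11
```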
The model is now generally available across Anthropic’s own products and through its API, as well as on Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
Anthropic said the upgrade delivers the most noticeable gains on demanding coding tasks. Users report being able to hand off hard coding work that previously required close supervision, with the new model handling complex, long-running tasks more consistently and paying closer attention to instructions.
The company also said the model can verify its own outputs before reporting results back to users, a behavior it described as new relative to earlier versions.
On vision, Opus 4.7 can now accept images up to 2,576 pixels on the long edge, or roughly 3.75 megapixels, more than three times the resolution supported by prior Claude models.
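Taken together, the two limits imply a maximum short edge of about 1,455 pixels for an image already at the 2,576-pixel long-edge cap. A quick sketch of that arithmetic (the cap values come from the figures above; treating them as hard joint limits is an assumption):

```python
LONG_EDGE_MAX = 2576     # max pixels on the long edge, per the article
MEGAPIXEL_CAP = 3.75e6   # approximate total-pixel cap, per the article

def max_short_edge(long_edge: int = LONG_EDGE_MAX) -> int:
    """Largest short edge that keeps total pixels within the megapixel cap."""
    return int(MEGAPIXEL_CAP // long_edge)

print(max_short_edge())  # → 1455
```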
Anthropic said this broadens the model’s usefulness for tasks requiring fine visual detail, including reading dense screenshots and extracting data from complex diagrams.
Perhaps the most notable aspect of the release is its role in Anthropic’s broader safety rollout strategy. The company recently announced Project Glasswing, which highlighted both the risks and potential benefits of AI for cybersecurity, and said it would keep its more powerful Claude Mythos Preview model restricted while testing new cyber safeguards on less-capable systems first. Opus 4.7 is the first such model.
Anthropic said it experimented during training with selectively reducing Opus 4.7’s cybersecurity capabilities, and is launching the model with automated safeguards designed to detect and block requests that indicate restricted or high-risk cybersecurity uses.
The company added that findings from this release will inform the eventual broader rollout of what it calls “Mythos-class” models. Security professionals seeking to use the new model for legitimate purposes, such as vulnerability research or penetration testing, can apply through a new Cyber Verification Program.
Regarding alignment, Anthropic’s evaluations show that Opus 4.7 exhibits low rates of concerning behaviors, such as deception, sycophancy, and cooperation with misuse, and performs better than its predecessor on honesty and resistance to malicious prompt-injection attacks. However, the company acknowledged the model is modestly weaker in some areas, including a tendency to give overly detailed harm-reduction advice on illegal drugs.
Anthropic’s internal alignment assessment described the model as “largely well-aligned and reliable, though not totally ideal in its behavior,” and noted that Mythos Preview remains the best-aligned model the company has trained.
Developers upgrading from Opus 4.6 should account for two cost-related changes. Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0 to 1.35 times as many tokens, depending on content type. The model also produces more output tokens at higher effort levels, particularly in later turns of agentic tasks, because it engages in more reasoning.
Anthropic said users can manage token consumption through an effort parameter, task budgets, or by prompting the model to be more concise.
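The tokenizer change compounds with pricing: the same prompt can cost up to roughly 35% more on the input side after migration. A rough budgeting sketch (the 1.0–1.35x range is the one Anthropic cites; the example token count is illustrative):

```python
def migrated_input_cost(opus46_input_tokens: int,
                        inflation: float = 1.35,
                        rate_per_million: float = 5.00) -> float:
    """Worst-case Opus 4.7 input cost for a prompt measured in Opus 4.6 tokens.

    The new tokenizer maps the same input to roughly 1.0-1.35x as many
    tokens, so 1.35 is the upper bound of the cited range.
    """
    return opus46_input_tokens * inflation * rate_per_million / 1_000_000

# A prompt that measured 100,000 tokens under Opus 4.6:
print(round(migrated_input_cost(100_000), 3))  # → 0.675
```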
Alongside the model release, Anthropic introduced a new “xhigh” effort level, sitting between the existing “high” and “max” settings, giving developers finer control over the tradeoff between reasoning depth and latency. In Claude Code, the default effort level has been raised to “xhigh” for all plans.
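In practice, the effort setting would be supplied per request. The sketch below is hypothetical: the field name `effort`, the model identifier, and the request shape are assumptions based on the levels named in the article, not a confirmed API schema.

```python
# Hypothetical request payload illustrating an effort setting.
# The "effort" field, its legal values, and the model identifier are
# assumptions for illustration, not a documented API schema.
request = {
    "model": "claude-opus-4-7",   # hypothetical model identifier
    "max_tokens": 4096,
    "effort": "xhigh",            # new level between "high" and "max"
    "messages": [
        {"role": "user", "content": "Review this diff for bugs."},
    ],
}

assert request["effort"] in {"low", "medium", "high", "xhigh", "max"}
```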
The company also released task budgets in public beta on its API platform, and added a new “/ultrareview” command in Claude Code that reviews code changes and flags bugs and design issues.
For more details, go to the Anthropic website.
About the Author
John K. Waters is the editorial director of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].