
Windsurf SWE 1.5 and Cursor Composer-1: Two new coding models you should be aware of
Nov 7, 2025
Over the past two weeks, two well-known AI companies, Cursor and Windsurf, have each shipped a new coding model.
Both are tightly integrated into their respective environments and can only be accessed through the Cursor and Windsurf editors, meaning they’re currently off-limits to third-party tools like VS Code. I guess it’s time to add “install another IDE” to your to-do list.
Composer-1
Built by Cursor (released as part of Cursor 2.0) as Cursor’s first native agent-coding model.
Instead of relying on static datasets or artificial benchmarks, it learned directly from real-world software engineering tasks using real development tools.
Designed for software-engineering intelligence and speed; Cursor's report claims the model completes most turns in under 30 seconds and is four times faster than similarly intelligent models.

What it brings
When the model integrates with the editor and can act on the actual repo, it can provide meaningful edits rather than generic suggestions. This means less context-switching.
It can plan, make changes across several files at once, and suggest specific updates. It follows your rules, fixes imports, updates tests and docs, and makes fewer mistakes with filenames and configuration than general chat models.
It has the potential to reduce friction in code chores such as refactoring, testing, and cleanup.
What to watch / limitations
Lock-in: As mentioned above, Composer-1 is only available via Cursor’s environment (not yet a general open API), so if you’re already using another toolchain or models, shifting may have a cost.
Benchmark transparency: Cursor makes big claims, but there is little independent verification to back them up.
Broader intelligence: Because it's specialised, it may not handle out-of-domain tasks (e.g., documentation generation, less common languages and frameworks) as well as more general models, a limitation users have also raised on Cursor's forum.
Cost and value: The model is priced similarly to frontier models (Gemini 2.5 Pro and GPT-5); you’ll need to assess whether the specialization is worth it in your context.
SWE-1.5 (Windsurf)
Built by Windsurf, described as a “frontier-size model with hundreds of billions of parameters that achieves near-SOTA coding performance.”
Speed is a major claim (950 tokens/second generation speed, 6x faster than Haiku 4.5 and 13x faster than Sonnet 4.5).
Like Composer, SWE-1.5 integrates into the Windsurf editor/IDE environment only.
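As a quick sanity check on those speed figures, you can back out the baseline throughputs they imply with simple arithmetic. A minimal sketch using only the numbers quoted above (assuming, on our part, that the multipliers refer to tokens-per-second generation throughput):

```python
# Back out the generation speeds implied for the comparison models
# from the figures Windsurf quotes for SWE-1.5.

swe_15_speed = 950  # tokens/second, as claimed

implied_haiku_45 = swe_15_speed / 6    # "6x faster than Haiku 4.5"
implied_sonnet_45 = swe_15_speed / 13  # "13x faster than Sonnet 4.5"

print(f"Implied Haiku 4.5 speed:  ~{implied_haiku_45:.0f} tok/s")
print(f"Implied Sonnet 4.5 speed: ~{implied_sonnet_45:.0f} tok/s")
```

That works out to roughly 158 tok/s for Haiku 4.5 and 73 tok/s for Sonnet 4.5, so it's worth checking whether those implied baselines match what you actually see from those models before taking the multipliers at face value.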
What it brings
High speed and high performance: You get great code generation and super low latency. For developers, that means less waiting, more flow.
Engineering-friendly training: Because the model is trained on software-engineering contexts, with senior engineers and open-source maintainers curating high-quality examples across many languages and frameworks, it knows not just what to code but how to code it.
Workflow integration: When the model is embedded in the IDE (Windsurf in this case), it can reduce context switches: you’re coding, invoking the assistant, staying in one environment.

What to watch / limitations
Editor ecosystem again: Since SWE-1.5 is tied to Windsurf’s environment/plugin ecosystem, adopting it can be a hassle if your team uses a different IDE or has established toolchains.
Transparency and benchmark context: While the speed claims are striking (950 tok/s), it's worth digging into how they translate into real-world tasks rather than relying on limited, closed benchmarks.
Cost, access, and availability: As we mentioned above, the model is available only via Windsurf’s subscription or enterprise plan, and cost vs. benefit will be key.
Which one to choose: SWE-1.5 or Composer-1?
Based on our testing and various sources, SWE-1.5 comes out ahead of the two, with better code quality and faster generation speed. That said, both models are good; general-purpose frontier models like GPT-5 Codex and Claude Sonnet 4.5 still produce stronger results overall.
In terms of cost, both are priced similarly to top models like GPT-5. So it comes down to you: whether your workflow benefits more from speed and workspace integration, or from the broader insights that come with pairing these tools with smarter code-review platforms like Entelligence.ai.
Useful Links: https://cursor.com/blog/composer
Your questions, Decoded
What makes Entelligence different?
Unlike tools that just flag issues, Entelligence understands context — detecting, explaining, and fixing problems while aligning with product goals and team standards.
Does it replace human reviewers?
No. It amplifies them. Entelligence handles repetitive checks so engineers can focus on architecture, logic, and innovation.
What tools does it integrate with?
It fits right into your workflow — GitHub, GitLab, Jira, Linear, Slack, and more. No setup friction, no context switching.
How secure is my code?
Your code never leaves your environment. Entelligence uses encrypted processing and complies with top industry standards like SOC 2 and HIPAA.
Who is it built for?
Fast-growing engineering teams that want to scale quality, security, and velocity without adding more manual reviews or overhead.

Refer your manager to
hire Entelligence.
Need an AI Tech Lead? Just send our resume to your manager.